Red Hat Insights Blog

Latest Posts

  • Insight into 0-days

    Authored by: Stanislav Kontar

    Security-based Red Hat Insights rules attempt to analyze and detect issues that impact the security of your systems in different ways:

    • Detect high profile, high priority, and 0-day vulnerabilities
    • Detect misconfigurations of your software which may impact security
    • Detect other issues that could have security implications, such as expired certificates

    The Red Hat Product Security team works closely with the Red Hat Insights team to provide current, updated, and helpful content for these security rules. In this blog, we’ll focus on the first category of the rules, which are targeted at high profile vulnerabilities and the associated background work we do.

    Customer Security Awareness

    The Red Hat Product Security team continuously analyzes security vulnerabilities that affect Red Hat products. Some security flaws are recognized to be of especially great concern, or are expected to generate significant media attention. These issues might be branded (with a name, logo, website), are actively used in exploits “in the wild,” or are a severe problem in core packages or in the functionality of Red Hat products.

    When such a high-priority vulnerability is presented, the Red Hat Product Security team starts a process known as Customer Security Awareness (CSAw). CSAw issues are frequently kept secret (embargoed) for a period of time so that proper fixes can be developed and prepared for release by those involved, before the vulnerability is publicly disclosed. During an embargo period, the Red Hat Product Security team and engineering team(s) work with upstream package maintainers, security researchers, and security teams of other Linux vendors with the goal of creating an optimal fix. The objective is to allow stakeholders to have simultaneous access to the fix(es) so that as many end users as possible have access to it before a potential attacker gets details on how the vulnerability can be exploited.

    In addition to analyzing the vulnerability in depth, making sure we have well-tested fixes available, and creating an article to explain the issue, the Red Hat Product Security and Red Hat Insights product teams start another process to make sure that Red Hat Insights brings a high level of value to security-conscious customers.

    Preparation work

    It is often a race against the clock to develop detections and remediations for a vulnerability before the exploit goes public. We treat the data and content around these issues with the highest priority, knowing that a significant number of customers depend on it. We must keep the information confidential, but we also have to cooperate with various internal parties and subject matter experts to create a response. To that end, we create a private repository, with access restricted to peers on a “need to know” basis. Team coordination is critical.

    Here are three recent examples of CSAw issues:

    These issues utilized different engineering and testing groups. Each issue had unique technical nuances and potential impacts that had to be evaluated and corrected … all simultaneously. Oftentimes we’ll have up to four developers collaborating on the final solution. With all of these moving parts, good lines of communication between collaborators are essential.

    Identification of vulnerable packages

    Once we have the shared, embargoed workspace set up, we collaborate on the vulnerability, with each engineer bringing their unique experience and expertise to bear. The first step is to identify which of our packages contain the vulnerable software and create a list of vulnerable packages that were released in various channels.

    The breadth of the Red Hat Insights coverage goes beyond our initial analysis that kicked off the CSAw event. Red Hat Insights coverage includes outdated/out-of-support versions of the packages, and the solution(s) must support those older versions.

    Like most major Linux distributions, Red Hat Enterprise Linux uses a process called backporting whereby security fixes are applied to existing stable versions instead of only using new, upstream packages that might introduce new features, changes, or unexpected behavior with the package. If you are not familiar with this concept, you can read Determining your risk, which discusses why commercial security scanners are often wrong when it comes to our products.

    Because of our backporting policy, we do not rely on version comparison. We must be much more precise, so we create a tree of vulnerable package versions. If a system analyzed by Red Hat Insights uses a vulnerable package version, we flag it as such.
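
    To illustrate the idea, here is a simplified sketch (not the actual Insights back-end logic) of matching the exact installed build against a list of builds known to be vulnerable, rather than relying on a "version less than X" comparison that backporting would defeat. The package and build strings are hypothetical examples.

    ```yaml
    # Simplified sketch, not the actual Insights back-end logic: flag a host when its
    # installed openssl build is on a (hypothetical) list of vulnerable builds, instead
    # of relying on a simple version comparison that backporting would defeat.
    - hosts: all
      vars:
        vulnerable_openssl_builds:        # hypothetical example values
          - openssl-1.0.1e-42.el6
          - openssl-1.0.1e-51.el7_2.4
      tasks:
        - name: Query the exact installed openssl build
          command: rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}' openssl
          register: openssl_build
          changed_when: false
          failed_when: false

        - name: Report the host if that exact build is known to be vulnerable
          debug:
            msg: "{{ inventory_hostname }} has a vulnerable openssl build installed"
          when: openssl_build.stdout in vulnerable_openssl_builds
    ```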

    Rule development

    Close cooperation within the Red Hat Product Security team is our next step, especially between the engineers working on the Insights rule and the group of analysts who analyze the vulnerability. In the race against the clock to develop the Insights rule, we stay in contact with the technical analysts and package maintainers, watching for updates to the available information. The Insights rule development for CSAw is a parallel and iterative process.

    The Red Hat Insights client can be configured to frequently gather the required data (such as installed packages, running services, or software configurations) so we can start analyzing the scope of impact. The Red Hat Product Security engineers who are working on the Insights rule develop several artifacts: rule server back-end logic, rule web UI front-end content, detection scripts, and Ansible Playbooks.

    The goal is to provide the Insights rule components, scripts, and playbooks to customers as quickly as possible.

    This is a complex task that can be interrupted many times and requires additional considerations and changes as new information emerges. To bring order and prevent missed steps, we use a set of extensive checklists that we update and improve based on our experience with them. These checklists help us deliver functionality that is up to our high quality standards and ensure that important details are not overlooked.

    We have one checklist to make sure that we follow a specific timeline, so we can act quickly and correctly if the vulnerability goes public earlier than expected. We have another checklist with all tasks, so that any team member can see the state of the work and cover for others in an emergency. And yet another checklist helps us communicate with analysts and make sure we are consistent when using their analyses.

    Categorization

    One of the main questions we ask ourselves when developing a rule is, “Can we break out all of the affected systems into more categories?” This is important because we want to build in flexibility and enable our customers to take a risk-based approach to remediation: customers with limited resources can quickly recognize the systems they should act on first.

    A good example of this categorization is a rule for another CSAw vulnerability, DROWN - Cross-protocol attack on TLS using SSLv2 (CVE-2016-0800), with eight different categorizations. This enables customers to easily evaluate the scale of impact on various systems – from systems that have the vulnerable package merely installed, to ones that are running software listening on externally accessible ports while using an old version of OpenSSL, making the system vulnerable to very effective exploitation.

    This real-world advice helps our customers prioritize their efforts and initially focus on areas where they have the most exposure, allowing them to choose to defer remediation of lower risk systems until after the greatest risks are remediated.
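
    As a rough sketch of that kind of exposure tiering (this is not the actual DROWN rule logic; the package name and port are illustrative assumptions), a host with the vulnerable package merely installed would land in a lower-urgency bucket than one that is also listening for TLS connections:

    ```yaml
    # Rough sketch of exposure tiering, not the actual DROWN rule: a host with the
    # package merely installed is lower urgency than one that is also listening for
    # TLS connections. Package name and port are illustrative assumptions.
    - hosts: all
      tasks:
        - name: Check whether openssl is installed at all
          command: rpm -q openssl
          register: openssl_installed
          changed_when: false
          failed_when: false

        - name: Check whether anything is listening on port 443
          shell: ss -ltn | awk '{print $4}' | grep -q ':443$'
          register: tls_listener
          changed_when: false
          failed_when: false

        - name: Classify the host by exposure
          debug:
            msg: >-
              {{ 'service listening on 443 - remediate first'
              if tls_listener.rc == 0
              else 'package installed only - lower urgency' }}
          when: openssl_installed.rc == 0
    ```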

    Mitigation

    Most vulnerabilities can be fixed by applying an update to affected packages. Ultimately, this is the best solution since it corrects the vulnerability. However, sometimes that is not desirable, or the fix might not be available immediately. Scheduling downtime for critical systems can be time-consuming, complex, or simply not feasible at the moment.

    If analysts are aware of effective mitigations of the issue – like using SELinux, changing software configurations or settings, running a short script, etc. – and the Red Hat Insights framework is able to detect those when they are used, the rule will also propose them. Having options for mitigations allows customers to better plan how they want to react without having to worry about exposure, and deploy the final fix at a time of their choosing.

    In some cases, when we agree that the mitigation is as good as a fix, the Insights rule will disappear from the Red Hat Insights notifications list. If we deem it to be a good, but temporary solution, its severity is toned down.
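
    For a flavor of what such a configuration-level mitigation can look like when expressed as automation, here is a minimal sketch; the SELinux boolean is a generic example and is not tied to any particular CVE or Insights rule:

    ```yaml
    # Sketch of a configuration-level mitigation expressed as automation; the SELinux
    # boolean below is a generic example, not tied to any specific CVE or rule.
    - hosts: all
      become: true
      tasks:
        - name: Turn off a risky feature via an SELinux boolean as an interim mitigation
          seboolean:
            name: httpd_enable_cgi        # hypothetical choice for illustration
            state: false
            persistent: true
    ```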

    Showtime!

    Before a CSAw vulnerability becomes public, we review our rule against real production data. The rule is still embargoed and is accessible to as few people as necessary, even within Red Hat. Because of this, the rule back-end is uploaded to the production server out-of-band. It is moved from one private space to another, as access to the production server is also limited.

    Everything is double-checked and tested as a whole. If testing goes as planned, the rule is ready when the CSAw vulnerability goes public. We share much of our work through our Red Hat Product Security Center articles and the Red Hat Insights service itself. Simultaneously, the vulnerability is announced on the mailing lists and to the public, a vulnerability article is published on the Customer Portal, fixes are made available in the repositories, and the rule status is changed to “active”.

    And at that point, the long days of work by many tireless people become available to Red Hat Insights customers – so you can be aware of the things that matter the most at the moment.


    Red Hat Security Blog

    Posted: 2017-09-18T12:53:46+00:00
  • Ansible and Insights Part 3 - Setting up Ansible Tower for Insights automated remediation

    Authored by: Will Nix

    For our final Ansible and Insights release blog, we will finish this three-part series by showing you how to configure Tower to talk with the Insights API and enable jobs for site-wide remediation. This builds on our previous blog post, Ansible and Insights Part 2 - Automating Ansible Core remediation, so if you do not have the prerequisites mentioned in Part 2, you should verify you have met those requirements and can build a Planner plan within Insights before trying to follow along.

    Prerequisites for being able to utilize Ansible functionality with Insights are:
    - Active RHEL subscription
    - Active Insights evaluation or entitlement
    - RHEL 7 or RHEL 6.4 and later
    - Ansible Tower 3.1.2 for examples in this blog post
    - Insights systems registered and reporting with an identifiable problem
    - Ability to manage systems via Ansible with the Insights system hostname or "display name" as the hostname in your Ansible inventory in Ansible Tower.
    - Ability to store credentials and projects, and to create templates within Ansible Tower (an administrator account is used in these examples).
    - Your Red Hat Customer Portal username/password (the same one you use to log in to Red Hat Insights on the Customer Portal).

    Creating a plan for your remediation in Red Hat Insights
    Similar to building out a remediation plan for use with Ansible, you can create a plan for Ansible Tower with the same procedure. We outline this process in Ansible and Insights Part 2 - Automating Ansible Core remediation, and the only minor difference is that once you save the plan, it can be synced to your Tower. You do not have to download the playbook, although you can download and modify it for manual or process-based (git, etc.) import into your Tower.

    Since we have our example plans built from our previous blog, we now need to set up Tower to interface with the Red Hat Insights API. This only takes a few one-time setup steps and a minimal amount of time.

    Setting up Insights Credentials
    - Log in to your Tower and click the Settings icon to enter the Settings menu.
    - Click Credentials to access the Credentials page.
    - Click the add button located in the upper right corner of the Credentials screen.
    - Enter the name of the credential to be used in the Name field. For example, "Red Hat Insights credentials".
    - In the Type drop-down menu, select Source Control.
    - Enter your Red Hat Customer Portal credentials associated with your Insights deployment in the Username and Password fields.
    - Click Save when done.

    Setting up Insights inventory
    The playbooks Insights generates include a hosts: line that contains the hostname Insights itself knows about, which may differ from the hostname Tower knows about. Therefore, make sure those hostnames match what Tower has in its inventory by comparing the systems in the Red Hat Insights Portal to the systems listed in the Tower Inventory.
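
    For example, the play header of a generated playbook targets systems by the names Insights knows (the hostnames below are hypothetical), and those same names must exist as hosts in the Tower inventory:

    ```yaml
    # Sketch of the play header in a generated playbook (hostnames are hypothetical);
    # each name listed under hosts: must match a host entry in the Tower inventory.
    - name: Remediate payload injection
      hosts: "rhel7-web01.example.com,rhel7-db01.example.com"
      become: true
      tasks: []
    ```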

    To create a new inventory for use with Insights:
    - Click the Inventory main link to access the Inventories page.
    - Click the Add button, which launches the New Inventory window.
    - Enter the name and organization to be used in their respective fields.
    - Click Save to proceed to the Groups and Hosts Management screen.
    - In the Hosts section (right side) of the Inventory display screen, click the Add Host button to open the Create Host dialog.
    - Enter the name associated with the Insights host that will be used in the Host Name field and click Save.

    Now that the inventory we want to manage has been added or imported into Tower, we should set up the Insights Planner sync project. Every time this project runs, it will sync the available Planner plans between Insights and your Tower.

    Setting up the Insights project
    - In your Tower, click the Projects main link to access the Projects page.
    - Click the Add button, which launches the New Project window.
    - Enter the appropriate details into the required fields; at minimum, set the SCM Type to “Red Hat Insights”.
    - Upon selecting the SCM type, the Source Details field expands. Enter the name of the credential you created in the previous step in the text field provided, or click the search button to look up and select the name.
    - Click to select the update option for this project, and provide any additional values, if applicable. For information about each option, click the help button next to the options.

    Creating an Insights job template
    Now that we have our credentials, inventory, and projects added, we will create a job template to run an Insights playbook. You can do this for any large scale job you would want to enable across the enterprise or for groups within your enterprise.
    - Click the Templates main link to access the Templates page.
    - Click the Add button and select Job Template from the drop-down menu list, which launches the New Job Template window.
    Enter the appropriate details into the required fields, at minimum. Note the following fields, which require specific Insights-related entries:
    - In the Inventory field, enter (or choose from the lookup) the name of the inventory you created with the appropriate hostnames used by Insights.
    - In the Project field, enter (or choose from the lookup) the name of the Insights project to be used with this job template.
    - In the Playbook drop-down menu, choose the playbook to be launched with this job template from the available playbooks associated with the selected Insights project.
    For additional information about each field, click its corresponding Help button or refer to Job Templates for details.

    Complete the rest of the template with other attributes such as permissions, notifications, and surveys, as necessary. When done completing the job template, click Save.

    - To launch the job template, click the Launch button (under Actions).
    - Once complete, the playbook's job results display in the Job Details page.

    For the examples we have been following in this blog series, use a Planner plan similar to the one from our previous blog post, such as the Payload Injection Fix plan that fixes a few systems within Insights.

    For that use case, you would select the "Payload Injection Fix" playbook from the Playbook drop-down, and the remediation we applied in the previous blog post can then be applied to machines from Tower.

    You can then go back to the Insights Planner and see that the Payload Injection Fix Insights plan has remediated the selected systems via Tower.

    Final note
    We hope this helps you see how powerful the Insights and Tower integration is becoming, giving operations teams the ability to scale guided remediation out to the entire enterprise. Please let us know how we’re doing with Insights integrations and the service by emailing us at insights@redhat.com or by using the Provide Feedback button at the top of every Insights Customer Portal page.

    Stay tuned for more in-depth and continued Red Hat Insights integrations into the Red Hat Management portfolio and other Red Hat software, and if you're interested in utilizing these technologies in your own enterprise, you can get started with an evaluation here.

    Posted: 2017-08-29T15:14:52+00:00
  • June 2017 service release: New and improved Red Hat Insights features and functionality

    Authored by: Will Nix

    The Red Hat Insights team is pleased to highlight our first post-Summit 2017 service release for functionality and feature enhancement.

    Red Hat Insights is a Software-as-a-Service (SaaS) offering that helps prevent downtime by enabling customers to proactively monitor for infrastructure risks and critical security alerts detected in their environments, while requiring no added infrastructure. Insights offers automated remediation capabilities via Ansible Playbooks, as well as Executive Reporting features and Health Scoring, and provides guidance on how to quickly and securely fix identified issues.

    Our June 2017 release brings several new features to the Customer Portal Insights Web UI that are currently available for production environments, and beta features that are offered for testing and feedback in Insights Beta.

    Read below for more information, or go check them out and let us know your thoughts by using the "Provide Feedback" button.

    For more information about the latest Insights release, refer to our Red Hat Insights Release Notes.

    Newest features:

    Incident Detection [Beta Release Pending]

    Detecting "Incidents" within an infrastructure is a new concept added to Red Hat Insights. Previously, Insights would proactively detect issues you were at risk of encountering in the future and identify them early so they could be acted upon before they're encountered. This core functionality still exists; however, the Insights engine has been expanded to now detect critical issues we know are currently impacting your infrastructure at the time of analysis. By highlighting these incidents differently within the UI, we aim to direct immediate attention and prioritize these incidents to be addressed quickly, preventing further or impending disruption.

    Insights Analysis of OpenShift Infrastructure [Beta Release Pending]

    Expands the capabilities of Insights to provide analysis of OpenShift infrastructures (Masters & Nodes).

    Global Group Filtering [Beta & Stable]

    Global group filters are now located throughout the UI, on almost all pages. These filters allow for modified views to only show the results within a selected group. The selected filter will remain with you as you navigate through Insights, until you reset or select another group.

    Additional Page Filtering Capabilities [Beta & Stable]

    Additional filtering capabilities have been added to Actions and Inventory views. Results can now be filtered by System Status (Checking-In or Stale), System Health (Affected or Healthy), and Incidents. Filtering is now designed to provide a consistent user experience no matter what page within Insights is being used.

    Red Hat Insights Blog Subscription [Beta & Stable]

    In an effort to keep users up to date with the latest news regarding Red Hat Insights, users are now automatically subscribed to the Red Hat Insights blog. New blog posts are published as new rules or features are added to Insights. Users can manage their subscriptions to this blog.

    Red Hat’s Status Page Integration [Stable]

    Integration with the Red Hat Status Page (status.redhat.com) has been completed and now provides up-to-date status of Red Hat Insights availability. The status page is used to communicate current outages, known availability issues or upcoming maintenance windows of Red Hat Insights stable, beta, and API.

    Automatic Stale System Removal [Beta & Stable]

    Automatic removal of stale systems helps users focus on the most up-to-date critical actions in their infrastructure, without the noise of older stale systems. A “stale” system is a system that is no longer checking in with the Insights service daily, as expected. Once identified, the UI will highlight this system so action can be taken. After one month has passed with a stale status, the system will automatically be removed from Insights views.

    Executive Reporting Enhancements [Beta]

    Executive reporting was added in the April 2017 update of Red Hat Insights, providing users with views of historical trends and snapshots of infrastructure health. We have received multiple requests to enhance the reporting and have added the following features:

    • Progress tracking and reporting on the number of issues resolved over the past 30 days.
    • Appendix of all rule hits provides a quick report of issues identified by Insights within an account infrastructure, and the number of impacted systems.
    • Overall Score improvements, on hover-over, provide additional details of what the score means and how it’s calculated. Additionally, the score color is modified based on the health of all systems.
    • Export to PDF allows users to save and share their complete executive report. [Coming Soon]

    Planner and Ansible Playbook Generation Improvements [Beta]

    The Planner and playbook-builder UI has been improved to allow for more flexibility when adding to existing plans or creating new plans. Systems can now be added to previously specified groups, as individual systems, or all systems. Actions available to add are now displayed in intelligent views to allow for easier and quicker selection.

    The Insights team thanks all those who helped beta test. We're always hard at work adding new features and functionality. Let us know how we can continue to improve Insights.

    Posted: 2017-06-07T17:04:51+00:00
  • Ansible and Insights Part 2 - Automating Ansible Core remediation

    Authored by: Will Nix

    Following up on our previous blog post about enabling Ansible automation with Insights, we will now look more closely at taking findings from Insights and using the actionable intelligence they provide to perform an automated remediation via an Ansible playbook. Ansible Tower setup and remediation will be covered in an upcoming post.

    Currently you can generate playbooks for Insights and Tower via Red Hat's customer portal. An upcoming release of Satellite 6 will further integrate Insights automated remediation into Satellite by allowing you to generate playbooks from the Satellite UI.

    Prerequisites for being able to utilize Ansible functionality with Insights are:
    - Active RHEL subscription
    - Active Insights evaluation or entitlement
    - RHEL 7 or RHEL 6.4 and later
    - Ansible (or Ansible Tower) installed
    - Insights systems registered and reporting with an identifiable problem
    - Ability to manage systems via Ansible with Insights system hostname or "display name" as the hostname in your ansible inventory

    Begin by logging in to the Insights interface on the Customer Portal at https://access.redhat.com/insights

    If you're already logged in, you'll be presented with the Insights Overview.

    From the Overview you can see quickly if you have any systems that have automated remediation identified. In the lower right of the console under Planner you will see "# issues can be resolved automatically by Ansible" or something similar. You can use this to quickly see all items you can remediate with Ansible.


    From here you have options. You can use Planner in the left nav menu to build a plan, you can click "Create a Plan/Playbook" from the Overview, or you can use the listed Actions (Actions -> Category) drop-downs for affected systems.

    In this example we will navigate to Actions -> Security, and choose the "Kernel vulnerable to man-in-the-middle payload injection". We see that several systems are affected by this risk, and it has a medium likelihood, a critical impact, and a high overall total risk. This Action is also Ansible enabled.

    Clicking into the Action itself gives us a description of the problem and a list of systems affected. From here we can create a playbook for the affected systems.

    I'll choose the three affected systems and use the Actions drop-down dialog to Create a New Plan/Playbook.

    Give this plan a name (this is important; if you're using Tower integration this name is how we quickly identify the playbooks within Tower as well) and ensure the systems selected are correct. Click "Save" and the plan is created. From here you can delete or edit the plan to specify a maintenance window and duration, edit systems associated with this plan, or Generate Playbook and Export to CSV. We want to generate a playbook, so click that button.

    If the playbook you're building has options (like this example), you will be presented with a dialog to decide which tasks you want to include in your Ansible playbook. Currently you may need to go to "Playbook Summary", as shown in the graphic above, to modify the playbook options. Since the selected machines are critical to my environment, and I can't afford to take downtime to fix them with a kernel update and reboot, I'll use the active mitigation, "Set sysctl ipv4 challenge ack limit". This will allow me to actively mitigate the system and make it non-vulnerable. A more permanent fix would be to update the kernel, but if I'm sure nothing is going to change my sysctl variable back (config management tools may reverse these changes if not also updated), then I would be safe with this active mitigation.

    Click Save to confirm your selection and finalize playbook generation by Downloading Playbook.

    You can then use this downloaded Ansible playbook YML file to remediate the systems with: $ ansible-playbook $downloaded_filename.yml

    Filenames follow the scheme plan_name-plan_number-unixtime.yml, and the playbook itself records which systems are being remediated and which rule versions are used.
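
    For reference, a hand-written task roughly equivalent to the challenge ACK mitigation chosen above might look like the following. This is a sketch, not the playbook Insights generates, and it assumes the mitigation amounts to setting net.ipv4.tcp_challenge_ack_limit to a very large value:

    ```yaml
    # Hand-written equivalent of the challenge ACK limit mitigation (a sketch, not the
    # playbook Insights generates); the very large value effectively disables the limit.
    - hosts: all
      become: true
      tasks:
        - name: Raise net.ipv4.tcp_challenge_ack_limit to mitigate payload injection
          sysctl:
            name: net.ipv4.tcp_challenge_ack_limit
            value: "2147483647"
            sysctl_set: true
            state: present
    ```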

    After the playbook runs, assuming there are no errors you need to investigate further, refreshing Planner shows us that 3/3 systems have been remediated.

    Upon refreshing the Planner interface we see that the remediations were performed successfully and these systems now have a check mark as their status.


    That's how simple it is to start using Ansible playbooks to remediate systems reporting risks. Stay tuned for another upcoming blog post on how to scale this to your entire infrastructure with Ansible Tower.

    Let us know your thoughts on the new features highlighted in our last post, in the comments on the blogs or with the Provide Feedback button inside of Insights!

    Thanks from all of us here at the Insights engineering and product teams, and happy remediating. Stay tuned for part 3, where we will be using Ansible Tower and Insights for enterprise remediation.
    -Will Nix

    Posted: 2017-06-01T15:55:58+00:00
  • Ansible and Insights Part 1 - Insights Automatic Remediation is Here

    Authored by: Stephen Adams

    Pairing Ansible and Insights may be the smartest thing since putting peanut butter and jelly together. With this partnership, we’ve enabled the ability for you to download playbooks from Insights to solve the problems in your infrastructure. With a few clicks, you can stop worrying, kick back, and bask in the glorious rays of automation.

    Our developers have done all the work of creating playbooks for you so that you don’t have to come up with them yourselves. We go through each rule in the Insights database, verify the steps, and create a playbook that deals with that exact problem on your systems. When you create an Insights Plan, all actions with available playbooks will have those playbooks automatically merged together into one single playbook. This makes it incredibly simple to fix many problems on all of your systems quickly and easily.

    If you already own Ansible Tower, you can easily take advantage of the Insights integration by selecting the plan you configured in Insights to run automatically as an Ansible playbook. Our playbooks will work in both Tower and Ansible Core, so you can utilize Insights automated remediation no matter what your Ansible infrastructure looks like.

    All this functionality is also available over our REST API. Resources for creating maintenance plans, obtaining playbooks, verifying systems’ state, etc. are built-in. Our API Documentation is a good place to learn more about integrating Insights detection and remediation capabilities into external systems or scripts.
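
    As a rough sketch of driving that API from a playbook, the Ansible uri module with your Customer Portal credentials is enough to pull data such as maintenance plans. The endpoint path below is an assumption; consult the API Documentation for the real resource URLs:

    ```yaml
    # Sketch of calling the Insights REST API from a playbook; the endpoint path is an
    # assumption - check the API Documentation for the real resource URLs.
    - hosts: localhost
      gather_facts: false
      vars:
        portal_user: "{{ lookup('env', 'PORTAL_USER') }}"
        portal_pass: "{{ lookup('env', 'PORTAL_PASS') }}"
      tasks:
        - name: List maintenance plans (hypothetical endpoint)
          uri:
            url: "https://access.redhat.com/r/insights/v3/maintenance"   # assumption
            method: GET
            user: "{{ portal_user }}"
            password: "{{ portal_pass }}"
            force_basic_auth: true
            return_content: true
          register: plans

        - name: Show the response body
          debug:
            var: plans.json
    ```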

    Amaze your colleagues and managers with your lightning fast response to critical infrastructure issues thanks to Ansible and Insights!

    Stay tuned for Parts 2 and 3 of our Ansible and Insights series for a walkthrough on how to quickly set up remediation with Ansible Core and Ansible Tower.

    Part 2 now available here: Ansible and Insights Part 2 - Automating Ansible Core remediation

    Posted: 2017-05-15T21:06:53+00:00
  • What’s Your Total Risk?

    Authored by: Chris Henderson

    Recently we rolled out a couple of new features to help you assess and prioritize your risk: the Likelihood and Impact ratings that you will see assigned to individual Insights Rules.

    Likelihood is the probability that a system will experience the impact described in the rule. Since we are trying to be proactive in detecting the conditions before there is an impact, Likelihood is an important factor when prioritizing work. The higher the Likelihood, the more urgent it is to proactively remediate the conditions so you won’t be unexpectedly impacted.

    Impact is similarly important for determining risk. If the impact is low, then the priority to fix it would be lower. An intermittent performance degradation might be low impact, versus an issue that could eat your data for lunch; data loss would generally be a higher impact.

    When you combine Likelihood and Impact you get your Total Risk. Insights gives you these three metrics to help you make better decisions about what should be fixed first. One of the main goals of Insights is to give you the information necessary to decide what is the most important and urgent thing to fix in your environment. We strive to help you avoid being impacted by an unplanned outage.

    Posted: 2017-04-10T19:25:12+00:00
  • Keep your Satellite in orbit with Insights

    Authored by: Jonathan Newton

    For many customers, Satellite is a vital part of their infrastructure - distributing and managing package updates, organizing systems, and providing a robust virtualization infrastructure. The overall health of your Satellite system can impact much of your daily workflow within your environment. Issues with Satellite can lead you into digging through log files, googling for answers, or calling support to find the source of the problem. With Insights, you can save multiple hours of troubleshooting time by having the root cause and the solution at your fingertips.

    We have bundled these rules into one Satellite topic so that you can easily determine if Insights has detected an issue and what steps you should take to remediate it. And as usual, we’ll keep adding rules to the topic as we discover new issues related to Satellite.

    Here is a list of rules initially included in the Satellite topic:
    - Failure to synchronize content to Satellite due to deadlock in postgresql when database needs cleaning
    - Database deadlock on Satellite server when serving too many connections to postgresql
    - Decreased performance when clients with duplicate OSAD IDs connect to the Satellite server
    - Newly synced content will not be available to clients due to taskomatic service not running
    - Satellite 5 subscription certificate has expired

    Posted: 2017-02-27T15:21:28+00:00
  • Is Your Bond Strong?

    Authored by: Chris Henderson

    Most critical physical systems use multiple network interfaces bonded together to provide redundancy and, depending on the workload, to provide greater network throughput. Bonding can be configured in either manner depending on the mode specified in the bonding configuration file. It is quite common to misconfigure bonding: the configuration is case sensitive, so something might be capitalized that shouldn’t be, or you might have misunderstood the documentation and configured an incorrect or suboptimal bonding mode.

    The Red Hat Insights team has identified a number of misconfigurations that can leave your system without the redundancy you expect, or that will degrade network performance when you most need it. We have bundled all of these rules into one Network Bonding topic so that you can easily know whether Insights has detected an issue and, if so, what steps you should take to remediate it. We’ll keep adding rules to the topic as we discover new issues related to network bonding.
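
    As a small illustration, here is a sketch of pinning the bonding mode explicitly through automation (the interface name and options are example values, not a recommendation for your environment); note that the mode value is matched case-sensitively by the bonding driver:

    ```yaml
    # Sketch of setting an explicit bonding mode (example values; adjust the interface
    # name and options for your environment). The mode value is matched case-sensitively
    # by the bonding driver, e.g. "802.3ad" rather than "802.3AD".
    - hosts: all
      become: true
      tasks:
        - name: Set BONDING_OPTS explicitly in the bond's ifcfg file
          lineinfile:
            path: /etc/sysconfig/network-scripts/ifcfg-bond0
            regexp: '^BONDING_OPTS='
            line: 'BONDING_OPTS="mode=802.3ad miimon=100"'
    ```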

    Here is the list of rules initially included in the Network Bonding topic:

    • Decreased network performance when GRO is not enabled on all parts of bonded interfaces
    • Verify EtherChannel configuration
    • Upgrade initscripts package
    • Bonding might activate incorrect interface
    • Bonding negotiation issue
    • VLAN tagging failover issue on bonded interface
    • Unexpected behavior with bad syntax in bond config
    • Decreased network performance when not using a correct LACP hash in networking bonding
    • Monitoring disabled for network bond
    • Failure to generate a vmcore over the network when missing bonding option parameter in kdump configuration file

    Posted: 2017-02-13T15:12:49+00:00
  • Introducing Topics, Redesigned Actions & Additional Features

    Authored by: Rob Williams

    You may have noticed that the interface for Red Hat Insights underwent some changes recently. Our developers have been hard at work to provide a richer, more streamlined experience based on your feedback and recently released some new features. Here is a detailed list of recent Insights UI improvements.

    • Introducing Topics - Topics are a new way to present groups of actionable intelligence, providing Insights with additional categories such as SAP, Oracle, kdump, and networking.
    • Redesigned Overview - Our overview page provides a glimpse into your infrastructure health, upcoming plans, and system registration. In addition, customers now receive a curated feed from Red Hat product and security teams, late-breaking vulnerabilities, and other Red Hat Insights news.
    • Provide Direct Feedback - We truly value your input, and many enhancements are directly related to customer feedback. The ability to quickly and easily provide feedback is now integrated within the interface, giving customers a direct line for feedback regarding any Insights features, suggested improvements, or rules.
    • Additional Views On Inventory - It is now possible to select between card or table views for enhanced sorting & filtering of systems, deployments, and assets in the inventory.
    • Enhanced Actions Page - In an effort to continue providing quick access to the most critical information in your infrastructure, we have enhanced our Actions interface. New charts provide more visibility infrastructure-wide to help identify risk based on severity levels.
    • Notifications Icon - Receive alerts about systems not checking in, suggested or upcoming plans, or other critical Insights information from the new notifications icon. Click to view, drill down, or dismiss alerts.

    If you would like to see the latest features in development, take a look at Insights Beta.

    Posted: 2017-01-24T16:21:13+00:00
  • Put Your SAP Applications on a Firm Foundation

    Authored by: Stephen Adams

    Red Hat Insights is all about making sure your systems are running as smoothly as possible, not just for Red Hat applications but also for your other enterprise apps. We’ve begun developing rules tailored to large enterprise applications that could use the fine-tuning expertise that Red Hat provides. We’ve nailed down the optimal settings required by SAP apps, and now Insights can let you know if those are in place on your systems.

    We’ve introduced SAP related rules for alerting you to system configurations which are not up to the specs recommended by either Red Hat or SAP. Having these enterprise apps on systems tailored to their specific needs can be greatly beneficial for the system and more importantly to the clients that have to use them.

    We want the apps you entrust to Red Hat Enterprise Linux to be as effective and efficient as you need them to be. These new rules will help you accomplish that goal.

    The new SAP rules, with their descriptions and references:

    • SAP application incompatibility with installed RHEL Version
      Description: SAP applications will encounter compatibility errors when not running on RHEL for SAP.
      Reference: Overview of Red Hat Enterprise Linux for SAP Business Applications subscription
    • Decreased application performance when not running sap-netweaver tuned profile with SAP applications
      Description: Enable the sap-netweaver tuned profile to optimize hosts for SAP applications.
      Reference: Overview of Red Hat Enterprise Linux for SAP Business Applications subscription
    • Decreased SAP application performance when using incorrect kernel parameters
      Description: When SAP's kernel parameter recommendations are not followed, SAP applications will experience decreased performance.
      Reference: Red Hat Enterprise Linux 6.x: Installation and Upgrade - SAP Note
    • Decreased SAP application performance when file handler limits do not meet SAP requirements
      Description: Current file handle limits do not meet the application requirements as defined by SAP. This results in decreased SAP application performance.
      Reference: Red Hat Enterprise Linux 7.x: Installation and Upgrade - SAP Note
    • Time discrepancy in SAP applications when not running ntp on SAP servers
      Description: SAP strongly recommends running an ntp service on systems running SAP.
      Reference: Red Hat Enterprise Linux 7.x: Installation and Upgrade - SAP Note
    • Database inconsistencies when UUIDD not running with SAP applications
      Description: SAP applications require UUIDD to be installed and running in order to prevent UUIDs from being reused in the application. When UUIDD is not running, database inconsistencies can occur.
      Reference: Linux UUID solutions - SAP Note
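
    As an illustration, a minimal sketch (not an Insights-generated playbook) of enforcing two of the recommendations above, the sap-netweaver tuned profile and a running uuidd service, might look like this; the host group is hypothetical and package names should be verified for your RHEL release:

    ```yaml
    # Minimal sketch (not an Insights-generated playbook) of enforcing two of the SAP
    # recommendations above. The group name is hypothetical; verify package names
    # for your RHEL release.
    - hosts: sap_servers
      become: true
      tasks:
        - name: Ensure the SAP tuned profiles and uuidd are installed
          yum:
            name:
              - tuned-profiles-sap
              - uuidd
            state: present

        - name: Activate the sap-netweaver tuned profile
          command: tuned-adm profile sap-netweaver
          changed_when: false

        - name: Ensure uuidd is running and enabled
          service:
            name: uuidd
            state: started
            enabled: true
    ```
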
    Posted: 2017-01-06T13:54:30+00:00