Dataset columns: content (string, 0-557k chars), url (string, 16-1.78k chars), timestamp (timestamp[ms]), dump (string, 9-15 chars), segment (string, 13-17 chars), image_urls (string, 2-55.5k chars), netloc (string, 7-77 chars)
UPS Easy Ship Setup The Easy Ship tools let wineries that self-fulfill their orders streamline the fulfillment process by generating shipping labels for supported carriers right from WineDirect. Easy Ship Setup: Learn which features need to be set up before using the Easy Ship tools. Learn More > Easy Ship Setup The Easy Ship tools should be included with your site setup. There are a few setup tasks that must be completed before you can start generating your shipping labels: Step 1. Fill out the form below to provide WineDirect with some minimal UPS information. If you don't know what the form info is, please contact your local UPS rep and they can assist you. When setting up the credentials with UPS, ensure there are no special characters; Easy Ship will error if the username or password contains special characters. Step 2. If you are using ShipCompliant, please follow the instructions at this link and add your information to the form at the bottom of that page. Step 3. Ensure that all products have weights assigned to them so that rates can be calculated accurately. These can be added manually per product or using the mass export/import tools. Step 4. Navigate to Settings > Misc > Pickup Locations and mark the pickup location where UPS will be picking up packages as Is Default Fulfillment Location. For complete setup instructions please see the Pickup Locations Documentation > Step 5. Navigate to Settings > Misc > Package Types to create the different packaging options that will be used when boxing wine orders (e.g. 3-bottle, 6-bottle, 12-bottle packaging). For complete setup instructions please see the Package Types Documentation > Step 6. Update your shipping types by going to Store > Shipping > Choose your shipping strategy > Manage Shipping Types, then change the code field to the corresponding UPS type code from Easy Ship Codes here >
https://docs.winedirect.com/Shipping/Misc-Shipping-Documents/Easy-Ship/UPS-Setup
2020-03-28T21:46:01
CC-MAIN-2020-16
1585370493120.15
[]
docs.winedirect.com
Administrator Rights One of the first things you'll want to check if you're running into a QuickBooks Desktop integration issue is that you're logged into QuickBooks on the Admin account. A different user with admin rights will not work and may cause unexpected behavior. Payroll Items If payroll items are not being pulled in for your users, this could be caused by a permissions issue. You can update your permissions by selecting Edit --> Integrated Applications --> click the Company Preferences tab, click on Buddy Punch and then click on Properties. You will then want to check the box next to "Allow this application to access Social Security Numbers, customer credit card information, and other personal data" as shown below. Another main reason a payroll item may not be pulled in is that the Employee in QuickBooks has not been assigned a payroll item. You will want to ensure that all Employees have been assigned the correct payroll items.
https://docs.buddypunch.com/en/articles/3388308-quickbooks-desktop-payroll-items-are-not-being-synced
2020-03-28T20:52:03
CC-MAIN-2020-16
1585370493120.15
[array(['https://buddypunch.intercom-attachments-1.com/i/o/153490812/9f9dccc96f6ae9a7a78bd474/2019-10-03_1429.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/149191173/2592006429874786029f3953/Checkbox.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/69179752/7e34e46a0a07393e000bb157/payroll_items.png', None], dtype=object) ]
docs.buddypunch.com
The Gradle team is pleased to announce Gradle 4.4. First and foremost, this release of Gradle features some exciting improvements for IDE users: vswhere and VS toolchain discovery changes if you plan to use Gradle with VS 2017. The eclipse plugin now provides separate output folders. This allows Eclipse plugins to provide more sophisticated classpath management. Buildship 2.2 will take advantage of this feature to avoid one large global classpath when running Java applications or executing tests in Eclipse IDE. No discussion about IDE support for Gradle would be complete without mentioning improvements to the Kotlin DSL. Version 0.13 is included in Gradle 4.4 and provides support for writing settings.gradle.kts files, Kotlin standard library extensions to the Java 7 and Java 8 APIs for use in build scripts, improvements to the plugins {} DSL, and more! See the Kotlin DSL 0.13 release notes for more details. This version of Gradle supports version ranges in parent elements of a POM. You can see an example below. C and C++ developers will enjoy better incremental builds and build cache support for C/C++ because this version of Gradle takes compiler version and system headers into account for up-to-date checks. This version of Gradle fully supports the combination of Play 2.6 and Scala 2.12, with improvements and fixes to runPlayBinary, the distributed Play start script, and other improvements. Previous versions of Gradle required that all transitive dependencies of a given plugin were present in the same repository as the plugin. Gradle 4.4 takes all plugin repositories into account and can resolve transitive plugin dependencies across them. Learn about this and other plugin repository handling improvements in the details. Last but not least, several 3rd party dependencies including Ant were updated to their latest versions containing security and other bug fixes. Potential breaking changes and deprecations discussed below include: the mavenCentral() repository URL, the Test task structure, the eclipse plugin, @Incubating methods, and the pluginManagement.repositories change. Here are the new features introduced in this Gradle release. The eclipse plugin now defines separate output directories for each source folder. This ensures that main and test classes are compiled to different directories. The plugin also records which Eclipse classpath entries are needed for running classes from each source folder through the new gradle_scope and gradle_used_by_scope attributes. Future Buildship versions will use this information to provide a more accurate classpath when launching applications and tests. It is now possible to compile native applications with the Visual C++ toolchain packaged with all versions of Visual Studio 2017. Note that discovery of a Visual Studio 2017 installation requires the vswhere utility. Visual Studio 2017 versions earlier than update 2 do not install vswhere automatically, so to use one of these earlier versions of Visual Studio 2017 when vswhere is not installed, you'll need to set the installation directory on the VisualCpp toolchain. The Tooling API now allows model builders to accept parameters from the tooling client. This is useful when there are multiple possible mappings from the Gradle project to the tooling model and the decision depends on some user-provided value. Android Studio, for instance, will use this API to request just the dependencies for the variant that the user currently selected in the UI. This will greatly reduce synchronization times. For more information see the documentation of the new API.
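As a sketch of what such a parent version range looks like, a parent declaration might be written roughly as follows (the groupId, artifactId and range values are placeholders for illustration, not taken from the release notes):
<parent>
    <groupId>com.example</groupId>
    <artifactId>example-parent</artifactId>
    <!-- a Maven version range; the highest matching parent version is resolved at build time -->
    <version>[3.0,4.0)</version>
</parent>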
When resolving an external dependency from a Maven repository, Gradle now supports version ranges in a parent element of a POM, which was introduced by Maven 3.2.2. C/C++ compilation now takes system headers and the compiler vendor and version into account, making it safer to use those tasks with incremental build and experimental native caching. Before Gradle 4.4, changing the compiler did not make the compilation task out of date, even though different compilers may produce different outputs. Changes to system headers were not detected either, so updating a system library would not have caused recompilation. This version of Gradle improves the runPlayBinary task to work with Play 2.6, and the dist task fixes the generated start script. You can read more in the improved Play plugin user guide chapter. Special thanks to Marcos Pereira for extraordinary contributions here. // settings.gradle pluginManagement { repositories { maven { name = "My Custom Plugin Repository" url = "https://..." } } } // settings.gradle pluginManagement { repositories { gradlePluginPortal() jcenter() google() mavenCentral() } } Plugin resolution now takes all plugin repositories into account and can resolve transitive plugin dependencies across them. Previous versions of Gradle required that all transitive dependencies of a given plugin were present in the same repository as the plugin. Finally, the Gradle Plugin Portal repository can now be added to build scripts. This is particularly useful for buildSrc or binary plugin builds: // build.gradle repositories { gradlePluginPortal() } mavenCentral() repository URL: In previous versions of Gradle the URL referred to by RepositoryHandler.mavenCentral() pointed to a different host; Sonatype recommends using the canonical URL instead. This version of Gradle makes the switch to repo.maven.apache.org when using the mavenCentral() API to avoid SSL errors due to MVNCENTRAL-2870. In this release, the Gradle team added a new chapter in the user guide documenting the Provider API. Annotating private task properties will not be allowed in Gradle 5.0. To prepare for this, Gradle 4.4 will warn about annotations on private properties. The warning is visible when building the task with the java-gradle-plugin applied: Task property validation finished with warnings: - Warning: Task type 'MyTask': property 'inputFile' is private and annotated with an input or output annotation The command-line options for avoiding a full rebuild of dependent projects in multi-project builds (-a / --no-rebuild) were introduced in a very early version of Gradle. Since then Gradle has optimized its up-to-date checking for project dependencies, which renders the option obsolete. It has been deprecated and will be removed in Gradle 5.0. Test task structure: Common test framework functionality in the Test task moved to AbstractTestTask. Be aware that AbstractTestTask is the new base class for the Test task. AbstractTestTask will be used by test frameworks outside of the JVM ecosystem, so plugins configuring an AbstractTestTask will find tasks for test frameworks such as XCTest and Google Test. eclipse plugin: The default output location in EclipseClasspath changed from ${project.projectDir}/bin to ${project.projectDir}/bin/default. @Incubating methods: org.gradle.nativeplatform.tasks.InstallExecutable.setDestinationDir(Provider<? extends Directory>) was removed. Use org.gradle.nativeplatform.tasks.InstallExecutable.getInstallDirectory() instead. org.gradle.nativeplatform.tasks.InstallExecutable.setExecutable(Provider<? extends RegularFile>) was removed.
Use org.gradle.nativeplatform.tasks.InstallExecutable.getSourceFile() instead. In previous versions, Gradle would prefer a version of Visual Studio found on the path over versions discovered through any other means. It will now consider a version found on the path only if a version is not found in the registry or through executing the vswhere utility (i.e. it will consider the path only as a last resort). In order to force a particular version of Visual Studio to be used, configure the installation directory on the Visual Studio toolchain. This version includes several upgrades of third-party dependencies: jackson: 2.6.6 -> 2.8.9, plexus-utils: 2.0.6 -> 2.1, xercesImpl: 2.9.1 -> 2.11.0, bsh: 2.0b4 -> 2.0b6, and bouncycastle: 1.57 -> 1.58, to fix security issues. Gradle does not expose public APIs for these 3rd-party dependencies, but those who customize Gradle will want to be aware. Gradle has been upgraded to embed Ant 1.9.9 instead of Ant 1.9.6. The HTTP status codes 5xx can be considered unrecoverable server states. Gradle will now explicitly rethrow exceptions that occur in dependency resolution instead of quietly continuing to the next repository, similar to the timeout handling introduced in Gradle 4.3. pluginManagement.repositories changed: Before Gradle 4.4 it was a PluginRepositoriesSpec. This type has been removed and pluginManagement.repositories is now a regular RepositoryHandler. We would like to thank the following community members for making contributions to this release of Gradle. ClassLoader (gradle/gradle#3224). We love getting contributions from the Gradle community. For information on contributing, please see gradle.org/contribute. Known issues are problems that were discovered post release that are directly related to changes made in this release.
https://docs.gradle.org/4.4/release-notes.html
2020-03-28T20:52:46
CC-MAIN-2020-16
1585370493120.15
[]
docs.gradle.org
Automating Highlighting of Search Results in an Outlook Message Do you know about the new Outlook add-in that automates highlighting of your search string in an email message? The Visual How To, Automating Search Highlighting in Outlook 2010, provides a real add-in that you can build in Visual Studio and run with Outlook 2010, and that improves your experience searching for email content. Read on to also learn about doing programmatic searches and customizing the ribbon in Outlook! Improving Search Experience In Outlook, you can use Instant Search to search for items in a folder that contain a string of your choice. For example, you can search for “office”. Instant Search returns items that contain “office” in the subject or body, in the Outlook explorer. However, when you open a returned item in an inspector, the search string is not highlighted in the email body, as shown in the following screen shot. You will have to click Find in the Editing group of the Message tab, enter the search string again, and then click Find Next in order to see the occurrences within that email. These extra steps can be tedious if you are looking through multiple returned results to compare their content in inspectors. This add-in allows you to use just a single click to open a returned result, and all occurrences in the email body are automatically highlighted in the mail inspector. See the following screen shot. The next screen shot shows the custom interface to use this custom search, and the regular user interface of Instant Search provided by Outlook. The custom search interface contains a text box, a search icon, and an Open Message with Highlights button. The text box plays a similar role to the Instant Search text box, and is there just to provide the search string entered by the user to the add-in. The following is the typical user search scenario: 1. Enter a search string in the text box adjacent to Search for. 2. Press Enter or click the search icon to initiate the Instant Search. 3. From the list of search results returned by Instant Search, single-click a result item. 4. Click the Open Message with Highlights button. 5. The add-in opens the item in an inspector with all occurrences of the search string in the item’s body highlighted. Programmatic Search Aside from improving the search experience with automatic highlighting, this add-in shows two ways of performing programmatic search in Outlook: · Using Explorer.Search to perform Instant Search on items in a folder · Using the Microsoft Word object model to search in a mail or meeting item in an inspector. That is, use the Inspector.WordEditor property to obtain the Word Document object, then get the Range object that represents the entire email body, and use the Find object to search for and highlight occurrences of the original search string. Note: In Outlook, because a mail folder can contain mail items as well as other item types such as meeting item responses, before opening a search result item in an inspector, the add-in first has to identify the type of the selected item, and then use the appropriate inspector to open the item. For example, the add-in uses MailItem.Display if the item is a mail item, and MeetingItem.Display if the item is a meeting item. There are other ways to perform searches in Outlook; see Enumerating, Searching, and Filtering Items in a Folder for details.
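As a rough illustration of the Word object model approach described above, a C# sketch might look like the following (the inspector and searchText variables are assumed to be supplied by the add-in, and the yellow highlight color is an arbitrary choice for the example):
// Sketch only: highlight every occurrence of searchText in the open item's body.
var doc = inspector.WordEditor as Microsoft.Office.Interop.Word.Document;
if (doc != null)
{
    var range = doc.Content;                  // Range covering the entire email body
    range.Find.ClearFormatting();
    range.Find.Text = searchText;
    while (range.Find.Execute())              // on success, range is redefined to the match
    {
        range.HighlightColorIndex = Microsoft.Office.Interop.Word.WdColorIndex.wdYellow;
        range.Collapse(Microsoft.Office.Interop.Word.WdCollapseDirection.wdCollapseEnd);
    }
}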
Customizing the Ribbon This add-in also provides a useful example of how to customize the ribbon to meet its specific needs: · In order to search only mail folders, this add-in adds a custom tab to the ribbon of the mail explorer, but not to the ribbon of the other explorers such as the contacts explorer. To do that, in the ribbon XML, only specify idMso as TabMail, as shown below: <ribbon> <tabs> <tab idMso="TabMail"> <group id="CustomSearch" label="Custom Search"> · When you initiate Instant Search and when Instant Search returns, Outlook defaults to the Search Tools contextual tab in the explorer. In this scenario, after returning from Instant Search, the user should return to the Custom Search tab in order to use the custom button to open the message with highlighting. To achieve that, after returning from Explorer.Search, do the following: ribbonUI.ActivateTabMso("TabMail") · If the user switches to a different folder to do a new search, the add-in should clear the search text box to prepare for a new search. The add-in listens to the Explorer.FolderSwitch event and calls the IRibbonUI.Invalidate method to refresh the custom UI. See the video, and view the code for more details!
https://docs.microsoft.com/en-us/archive/blogs/officedevdocs/automating-highlighting-of-search-results-in-an-outlook-message
2020-03-28T22:08:02
CC-MAIN-2020-16
1585370493120.15
[array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/92/50/8865.Search%20Highlighting%20no%20highlights.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/92/50/1376.Search%20Highlighting%20with%20highlights.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/92/50/1781.Search%20Highlighting%20UI.jpg', None], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/92/50/3362.Search%20Highlighting%20Custom%20UI.jpg', None], dtype=object) ]
docs.microsoft.com
Customize Layout Mode RadDataLayout's Customize dialog enables a complete transformation of the control's layout at run time. Customize Dialog The customize dialog can be opened from the default context menu of RadDataLayout. Figure 1: Customize Dialog Perform Changes The Items tab contains the available elements which can be added to the control in order to change its layout. The Structure tab displays all items as part of the control's element tree; for complex layouts, this tab provides easy navigation. Figure 2: Changes at Runtime. Figure 3: DragOverlay
https://docs.telerik.com/devtools/winforms/controls/datalayout/customize-layout-mode
2020-03-28T22:06:50
CC-MAIN-2020-16
1585370493120.15
[array(['images/datalayout-customize-layout-mode001.png', 'datalayout-customize-layout-mode 001'], dtype=object) array(['images/datalayout-customize-layout-mode002.gif', 'datalayout-customize-layout-mode 002'], dtype=object) array(['images/datalayout-customize-layout-mode003.png', 'datalayout-customize-layout-mode 002'], dtype=object)]
docs.telerik.com
Caiman HDF5 Importer¶ You can import HDF5 files containing CNMF results that were produced externally by Caiman. The ROIs produced by CNMF, 3D-CNMF or CNMFE will be imported into the current work environment and placed onto the image that is currently open. You can also use this module through the viewer console, or in the Script Editor instead of clicking buttons. Example
http://docs.mesmerizelab.org/en/v0.2.3/user_guides/viewer/modules/caiman_hdf5_importer.html
2020-09-18T19:41:49
CC-MAIN-2020-40
1600400188841.7
[array(['../../../_images/caiman_hdf5_importer.png', '../../../_images/caiman_hdf5_importer.png'], dtype=object)]
docs.mesmerizelab.org
Bunifu Ellipse adds smooth edges to any control. How to use the Ellipse control on a form To use Bunifu Ellipse, simply locate it in your toolbox and drag it to the desired control. In this case we shall be making our form borderless with smooth corners. Upon dragging Bunifu Ellipse to our form, it will make our form borderless and rounded by default. We can customize its properties by clicking the bunifuEllipse1 component and then going to the Properties window. We will set the ellipse radius to a value of 7 for this example. You should see the result as shown below: How to use the Bunifu Ellipse control on any control Once we have the component added to our form, we can apply it to a control. For example, let's drop a Bunifu Flat Button on our form and then run the following code in the Load event handler of our form: C# code private void Form1_Load(object sender, EventArgs e) { this.bunifuElipse1.ApplyElipse(bunifuFlatButton1, 7); } VB.NET code Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load BunifuElipse1.ApplyElipse(bunifuFlatButton1, 7) End Sub Upon running the code you should see an effect like below: So basically, we can apply the ellipse to any control using the ApplyEllipse method. The method is overloaded as you can see here: - ApplyEllipse( ) - applies the ellipse on the current control (it's called by default) - ApplyEllipse(Control control) - applies the ellipse on the specified control, with the ellipse radius set at design time - ApplyEllipse(Control control, int ellipseRadius) - applies the ellipse on the specified control with the specified radius Custom properties - EllipseRadius - the radius of the ellipse component (Integer). We recommend EllipseRadius values below 7 for smooth curves. Beyond that value, the curves will look pixelated - TargetControl - the control on which Bunifu Ellipse is applied (Control) That's it! We hope Bunifu Ellipse will help you create elegant UI interfaces that give your users a great user experience. Should you have feedback or suggestions, please send them to us via chat in the bottom right corner of the screen.
https://docs.bunifuframework.com/en/articles/2272900-bunifu-ellipse
2020-09-18T19:50:51
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/73333240/a83580cfe1391bba91a31d18/1-23%5B1%5D.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/73333678/01f8357e2c118aeb3b93c6b3/4-9%5B1%5D.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/73334374/50d0ca1ed076b2857b4a07c5/5-7%5B1%5D.png', None], dtype=object) ]
docs.bunifuframework.com
Built-in Utility Functions Utility functions are available that can be used within map() and reduce() functions. Couchbase Server incorporates different utility functions beyond the core JavaScript functionality that can be used within map() and reduce() functions where relevant. dateToArray(date) Converts a JavaScript Date object or a valid date string such as "2012-07-30T23:58:22.193Z" into an array of individual date components. For example, the previous string would be converted into a JavaScript array: [2012, 7, 30, 23, 58, 22] The function can be particularly useful when building views using dates as the key where a reduce function is being used for counting or rollup. Currently, the function works only on UTC values. Timezones are not supported. decodeBase64(doc) Converts a binary (base64) encoded value stored in the database into a string. This can be useful if you want to output or parse the contents of a document that has not been identified as a valid JSON value. sum(array) When supplied with an array containing numerical values, each value is summed and the resulting total is returned. For example: sum([12,34,56,78])
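For instance, a view that keys documents by date and totals a numeric field could combine dateToArray() and sum() roughly as follows (doc.created_at and doc.amount are assumed example document fields, not part of the API):
// Map: emit an array-based date key so results can later be grouped by year/month/day
function (doc, meta) {
  if (doc.created_at && doc.amount) {
    emit(dateToArray(doc.created_at), doc.amount);
  }
}
// Reduce: total the emitted values with the built-in sum() utility
function (key, values, rereduce) {
  return sum(values);
}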
https://docs.couchbase.com/server/6.5/learn/views/views-writing-utility.html
2020-09-18T19:16:16
CC-MAIN-2020-40
1600400188841.7
[]
docs.couchbase.com
The "QR Code" component is an extra component which let you add QR code in your slides, useful for example to display links and url and if you wish your audience to easily access them. To generate the QR code, the project qrcode-generator from Kazuhiko Arase is used. This Web Component let you generate QR code like the following as svg (default) or img: Optionally you could also display a logo over your QR code: - QR Code in your project from npm using the following command: npm install @deckdeckgo/qrcode The Stencil documentation provide examples of framework integration for Angular, React, Vue and Ember. That being said, commonly, you might either import or load it: import '@deckdeckgo/qrcode'; import { defineCustomElements as deckDeckGoElement } from '@deckdeckgo/qrcode/dist/loader'; deckDeckGoElement(); The <deckgo-qrcode/> Web Component will generate per default a <svg/> QR code with a correction level set to high. Optionally, it's also possible to generate the QR code as an <img/> and/or to display a logo over it. To display a logo over your QR code, this Web Component provide a slot called logo. The <deckgo-qrcode/> expose the following properties: The <deckgo-qrcode/> could be styled using the following CSS4 variables which would only applies on the type <svg/>: In oder to style QR code if its type is set to <img/>, you will need to use properties instead of CSS4 variables. The <deckgo-qrcode/> component exposes the following method in case you would like to refresh your QR code, for example on resize of the window on in case you would set its content asynchronously: generate() => Promise<void> You could find all the examples in the src/index.html of the project. <deckgo-qrcode </deckgo-qrcode> Example with a logo: <deckgo-qrcode <img slot="logo" src="my-logo.svg"/> </deckgo-qrcode> It's possible to display a logo over your QR Code as the code generated with this Web Component have a correction level set to high meaning, if I understand correctly, that your content is encoded and displayed multiple times inside the QR code. Therefore, even if the logo cover a part of it, it will be still possible for a reader to read the content from "somewhere else" in the code. However, test it carefully and play with the colours, cell-size and size of your code to ensure its readability.
https://docs.deckdeckgo.com/components/qrcode/
2020-09-18T20:15:30
CC-MAIN-2020-40
1600400188841.7
[]
docs.deckdeckgo.com
DeckDeckGo bundles your presentation as a Progressive Web App which can then be hosted on any web server or hosting solution. Not sure what PWAs are? Check out Ionic's PWA Overview for more info. It is worth noting that DeckDeckGo, and therefore the slides you build with it, are SEO friendly. Therefore, you do not need to implement a complex server-side rendering (SSR) hosting solution. Before your final build and, most importantly, before publishing your deck online, don't forget to edit the information about your presentation in the following files: Edit the meta tags in the <head/> of your src/index.html file Generate your icons and replace the respective files in the assets folder. I suggest using RealFaviconGenerator, which is a great tool for this purpose. Update the information in the manifest.json file When you are ready for your talk or ready to publish your slides online, run the following command in a terminal to bundle your presentation for production: npm run build If you do not wish to remove your notes from your presentation, run the build command with the --notes attribute: npm run build -- --notes If you wish to run your presentation locally afterwards without rebuilding everything, you could run the following command to start only the dev server: npm run dev
https://docs.deckdeckgo.com/docs/publishing/
2020-09-18T19:48:21
CC-MAIN-2020-40
1600400188841.7
[]
docs.deckdeckgo.com
PictureEditOptionsMask.Offset Property Gets or sets the offset of the mask relative to the image. Namespace: DevExpress.XtraEditors.Controls Assembly: DevExpress.XtraEditors.v20.1.dll Declaration [SupportedMaskOption(SupportedMaskOptionKind.NotNone)] public Point Offset { get; set; } <SupportedMaskOption(SupportedMaskOptionKind.NotNone)> Public Property Offset As Point Property Value Property Paths You can access this nested property as listed below: Remarks A mask is aligned within the target image as specified by the PictureEditOptionsMask.MaskLayoutMode property. The PictureEditOptionsMask.Margin and Offset properties can be used to change the boundaries of the mask.
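As a hedged usage sketch, the option is typically reached through the editor's repository item, roughly as follows (pictureEdit1 and the offset values are placeholders, not from this reference page):
// Shift the mask 10 pixels right and 5 pixels down relative to the image
pictureEdit1.Properties.OptionsMask.Offset = new System.Drawing.Point(10, 5);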
https://docs.devexpress.com/WindowsForms/DevExpress.XtraEditors.Controls.PictureEditOptionsMask.Offset
2020-09-18T21:17:23
CC-MAIN-2020-40
1600400188841.7
[]
docs.devexpress.com
This guide covers all topics of special interest to application administrators. This guide provides information for application administrators, or users with administrative access privileges. Before addressing the major components of this guide, we recommend that you familiarize yourself with the general top-level architecture of the ThoughtSpot service. Administrators are responsible for many facets of the ThoughtSpot service. They are most frequently in charge of these common processes: - Installation and setup of ThoughtSpot - Managing users and groups - Security - System administration - Backup and Restore Additionally, administrators are often involved in the following workflows: - Data modeling - Using worksheets to simplify search - Using views for ‘stacked’ search; note that starting with Release 5.2, you can accomplish some aspects of search stacking by using the IN keyword - Enabling SearchIQ (Beta), ThoughtSpot’s natural language search - Managing scheduled jobs - Monitoring system health - Troubleshooting
https://docs.thoughtspot.com/6.0/admin/intro.html
2020-09-18T20:53:08
CC-MAIN-2020-40
1600400188841.7
[]
docs.thoughtspot.com
Using rule sets to transform URLs You can use rule sets to transform the URLs of Dynamic Media assets, for example: - Adding metadata to the URL for SEO (Search Engine Optimization) purposes. Use caution when using rule sets; they can prevent Dynamic Media content from being displayed on your website. To deploy XML rule sets: - Log on to your Dynamic Media Classic account. - On the navigation bar near the top of the page, click Setup > Application Setup > Publish Setup > Image Server. - On the Image Server Publish page, under the Catalog Management group, locate Rule Set Definition File Path, then click Select. - On the Select Rule Set Definition File (XML) page, browse to your rule set file, then in the lower-right corner of the page, click Select. - In the lower-right corner of the Setup page, click Close. - Run an Image Server Publish job. The rule set conditions are applied to requests to the live Dynamic Media Image Servers. If you make changes to the rule set file, the changes are applied immediately when you re-upload and re-publish the updated rule set file.
https://docs.adobe.com/content/help/en/experience-manager-64/assets/dynamic/using-rulesets-to-transform-urls.html
2020-09-18T21:36:27
CC-MAIN-2020-40
1600400188841.7
[]
docs.adobe.com
==== Chat ==== The "Chat" subsystem provides messaging in a chat format. To start a conversation, simply enter the desired text and press the send button. [{{ :en:teacher:chat:start_conversation.jpg?500 |Start the conversation}}] The features provided by this module are saving and "cleaning" a conversation. To save a conversation, simply press the "Save" button; the conversation is then stored in the course Documents in txt format. [{{ :en:teacher:chat:save_conversation.jpg?500 |Save conversation}}] [{{ :en:teacher:chat:save_chat_documents.jpg?500 |Save chat documents Course}}] To clean a conversation, simply press the cleaning button. [{{ :en:teacher:chat:chat_cleaning.jpg?500 |Chat Cleaning}}] [{{ :en:teacher:chat:chat_cleaning_2.jpg?500 |Talks cleaning effect}}]
https://docs.openeclass.org/en/3.6/teacher/chat?do=diff&rev2%5B0%5D=&rev2%5B1%5D=1495170536&difftype=sidebyside
2020-09-18T21:22:16
CC-MAIN-2020-40
1600400188841.7
[]
docs.openeclass.org
Writers¶ Satpy makes it possible to save datasets in multiple formats. For details on additional arguments and features available for a specific Writer see the table below. Most use cases will want to save datasets using the save_datasets() method: >>> scn.save_datasets(writer='simple_image') The writer parameter defaults to using the geotiff writer. One common parameter across almost all Writers is filename and base_dir to help automate saving files with custom filenames: >>> scn.save_datasets( ... filename='{name}_{start_time:%Y%m%d_%H%M%S}.tif', ... base_dir='/tmp/my_output_dir') Changed in version 0.10: The file_pattern keyword argument was renamed to filename to match the save_dataset method’s keyword argument. Available Writers¶ To get a list of available writers use the available_writers function: >>> from satpy import available_writers >>> available_writers() Colorizing and Palettizing using user-supplied colormaps¶ Note In the future this functionality will be added to the Scene object. It is possible to create single channel “composites” that are then colorized using users’ own colormaps. The colormaps are Numpy arrays with shape (num, 3); see the example below for how to create the mapping file(s). This example creates a 2-color colormap, and we interpolate the colors between the defined temperature ranges. Beyond those limits the image is clipped to the specified colors. >>> import numpy as np >>> from satpy.composites import BWCompositor >>> from satpy.enhancements import colorize >>> from satpy.writers import to_image >>> arr = np.array([[0, 0, 0], [255, 255, 255]]) >>> np.save("/tmp/binary_colormap.npy", arr) >>> compositor = BWCompositor("test", standard_name="colorized_ir_clouds") >>> composite = compositor((local_scene[10.8], )) >>> img = to_image(composite) >>> kwargs = {"palettes": [{"filename": "/tmp/binary_colormap.npy", ... "min_value": 223.15, "max_value": 303.15}]} >>> colorize(img, **kwargs) >>> img.show() Similarly it is possible to use discrete values without color interpolation using palettize() instead of colorize(). You can define several colormaps and ranges in the palettes list and they are merged together. See trollimage documentation for more information on how colormaps and color ranges are merged. The above example can be used in enhancements YAML config like this: hot_or_cold: standard_name: hot_or_cold operations: - name: colorize method: &colorizefun !!python/name:satpy.enhancements.colorize '' kwargs: palettes: - {filename: /tmp/binary_colormap.npy, min_value: 223.15, max_value: 303.15} Saving multiple Scenes in one go¶ As mentioned earlier, it is possible to save Scene datasets directly using the save_datasets() method. However, sometimes it is beneficial to collect more Scenes together and process and save them all at once. >>> from satpy.writers import compute_writer_results >>> res1 = scn.save_datasets(filename="/tmp/{name}.png", ... writer='simple_image', ... compute=False) >>> res2 = scn.save_datasets(filename="/tmp/{name}.tif", ... writer='geotiff', ... compute=False) >>> results = [res1, res2] >>> compute_writer_results(results)
https://satpy.readthedocs.io/en/stable/writers.html
2020-09-18T20:31:06
CC-MAIN-2020-40
1600400188841.7
[]
satpy.readthedocs.io
MobileVRInterface¶ Inherits: ARVRInterface < Reference < Object Generic mobile VR implementation. Description¶ This is a generic mobile VR implementation where you need to provide details about the phone and HMD used. It does not rely on any existing framework. This is the most basic interface we have. For the best effect, you need a mobile phone with a gyroscope and accelerometer. Note that even though there is no positional tracking, the camera will assume the headset is at a height of 1.85 meters. You can change this by setting eye_height. You can initialise this interface as follows: var interface = ARVRServer.find_interface("Native mobile") if interface and interface.initialize(): get_viewport().arvr = true Property Descriptions¶ The distance between the display and the lenses inside of the device in centimeters. The width of the display in centimeters. The height at which the camera is placed in relation to the ground (i.e. ARVROrigin node). The interocular distance, also known as the interpupillary distance. The distance between the pupils of the left and right eye. The k1 lens factor is one of the two constants that define the strength of the lens used and directly influences the lens distortion effect. The k2 lens factor, see k1. The oversample setting. Because of the lens distortion we have to render our buffers at a higher resolution than the screen can natively handle. A value between 1.5 and 2.0 often provides good results but at the cost of performance.
https://docs.godotengine.org/zh_CN/latest/classes/class_mobilevrinterface.html
2020-09-18T21:10:16
CC-MAIN-2020-40
1600400188841.7
[]
docs.godotengine.org
In TMClubSchedule, a membership level is a way to classify members based on how long they've been a member. Each role has a required Membership Level (or experience level). The membership level is then used to determine whether a member is eligible to fill a role such as Grammarian, Speaker, Toastmaster, etc. The level name itself can be anything; you decide what makes sense for your club setup. Here are some examples just to give you an idea: The membership level will be used for defining Role Eligibility[1]. For example, when we add a new Role, let's say "Speaker", we can tell which levels are eligible to be assigned to this "Speaker" role. This section assumes that you are already logged in to your Club Admin Dashboard. To list and manage existing levels, go to Members --> Membership Level On the Membership list page above, clicking the Add ( +) button will bring you to the "Add New Level" page: Fill in the form, then click Submit to save it. On the Membership list page above, find the entry you want to edit, then click the Edit ( pencil) button next to it. On the Membership list page above, find the entry you want to delete, then click the Delete ( trash) button next to it. See the Managing Roles section for more detail. ↩
https://docs.tmclubschedule.com/vpedu/members/membership-levels
2020-09-18T20:22:33
CC-MAIN-2020-40
1600400188841.7
[]
docs.tmclubschedule.com
You can monitor your PHP applications using our PHP client library, the Linux Agent, and the Metricly StatsD server. 1. In the metricly-agent.conf file, make sure the agent is enabled: enabled = True 2. Also enable the local StatsD server: # local statsd server [[[statsd]]] enabled = True 3. StatsD requires a client library to push metrics. You can use our PHP client library or an open source alternative. 4. Include the client library file on any page where you want to collect metrics, or reference the file globally. include 'StatsD_Client.php'; 5. Instrument your application code by calling the appropriate functions. Check out the example below or the timer example included in the library repo. //add a gauge in any section of your code .gauge('test.data.gauge', 20); //to add a timer you must first add code to calculate the start and end time. //typically we use epoch time, adding code at the start and end of the code you want to measure. //the timing function expects time in milliseconds .timing('test.data.timer', 1000, 1); //you can add or subtract from any metric with these functions .increment('test.data.counterup', 1); .decrement('test.data.counterdown', 1); 6. Save and restart both your application and the Linux Agent.
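As a sketch of the timing pattern mentioned in step 5, the elapsed time can be computed with microtime() before being reported; here $statsd stands in for however your client library instance is created, which is not shown on this page:
<?php
$start = microtime(true);                              // start of the code you want to measure
// ... code being measured ...
$elapsedMs = (int) round((microtime(true) - $start) * 1000);
$statsd->timing('test.data.timer', $elapsedMs, 1);     // assumed client instance; timing() as shown above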
https://docs.metricly.com/integrations/php/
2020-09-18T19:27:45
CC-MAIN-2020-40
1600400188841.7
[]
docs.metricly.com
I used to use Microsoft Flow just fine. But nothing has worked for me for more than a year now. I can sign in and see my Flows, but I can't manage anything other than that. When I access the "Data -> Connections" page it takes forever to finish, with a loading indicator. Opening the browser console shows this error. >app.2e2c0938dc9ddada0315.2.js:1 Uncaught (in promise) ClientError: Unable to get token for audience 'a8f7a65c-f5ba-4859-b2d6-df772c264e9d'. Details: 'AADSTS50020: User account '{EmailHidden}' from identity provider 'live.com' does not exist in tenant 'Microsoft Services' and cannot access the application '6204c1d1-4712-4c46-a7d9-3ed63d992682'(Microsoft Flow Portal) in that tenant. The account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account. Trace ID: 09e3cb9b-122b-4655-8c92-ce03722cac00 Correlation ID: b5ca662b-ff8d-40c3-a79f-593f8aed27c6 Timestamp: 2020-06-04 00:20:41Z'. How is it possible that a Microsoft account with an @hotmail.com email from provider live.com does not exist in tenant Microsoft Services? How can I (or someone) fix this?
https://docs.microsoft.com/en-us/answers/questions/31833/microsoft-flow-failed.html
2020-09-18T20:28:04
CC-MAIN-2020-40
1600400188841.7
[]
docs.microsoft.com
After activating the header top bar, you can customize its content, i.e. the Header Info Text and Social Links. To display the Header Info Text: - Go to Appearance > Customize > Header > Top Bar - Go to Header Info Text - Add phone numbers and other contact info as you like. You can also add shortcodes. - Click on Publish.
https://docs.themegrill.com/spacious/how-to-display-header-info-text/
2020-09-18T20:45:33
CC-MAIN-2020-40
1600400188841.7
[array(['https://docs.themegrill.com/spacious/wp-content/uploads/sites/3/2020/09/header-infotext-1024x461.png', None], dtype=object) ]
docs.themegrill.com
Any configuration change that affects the port number, IP address, hardware virtual servers or secure/non-secure setting will affect the LifeKeeper configuration. As there is no direct linkage between the server and this kit, you should follow this procedure to synchronize the configuration. - Remove protection of the IIS resource by taking the hierarchy out of service and then deleting the hierarchy. - On the primary server, run the IIS Console as appropriate and apply the changes to the server. - On the backup server, run the IIS Console and apply the changes to the server. - Add LifeKeeper protection to the IIS resource by creating the IIS resource hierarchy and extending it to the backup server.
http://docs.us.sios.com/sps/8.7.0/en/topic/changing-lifekeeper-microsoft-iis-recovery-kit-configuration
2020-09-18T21:10:15
CC-MAIN-2020-40
1600400188841.7
[]
docs.us.sios.com
Backing up the Presentation Server database Changing the automatic database backup settings The following table lists the default automatic database backup settings. To change a setting, run the appropriate command from the <Presentation Server installation directory>\truesightpserver\bin folder. Restoring a backup of the database - Stop the TrueSight Presentation Server service by running the following command from the <Presentation Server installation directory>\truesightpserver\bin folder: tssh server stop - Move the files in the <Presentation Server installation directory>\truesightpserver\data\pgsql folder to a temporary folder. - Extract the contents of the database backup file in the <Presentation Server installation directory>\truesightpserver\data\dbbackup folder to the <Presentation Server installation directory>\truesightpserver\data\pgsql folder. - Restart the TrueSight Presentation Server service by running the following command from the <Presentation Server installation directory>\truesightpserver\bin folder: tssh server start - If you encounter any issues after restoring a backup of the database, use the files that you moved to a temporary folder in Step 2. Note On Linux computers, add & at the end of the tssh server start command so that the process runs in the background and you can continue to use the shell. Note When you restore the database, some VM instances or devices are marked for deletion. Related topics Setting up and managing the App Visibility components and databases Configuring and managing the Presentation Server
https://docs.bmc.com/docs/TSPS/110/backing-up-the-presentation-server-database-790478475.html
2020-09-18T21:10:14
CC-MAIN-2020-40
1600400188841.7
[]
docs.bmc.com
Features The DataStax Enterprise C# Driver is a feature-rich and highly tunable C# client library for DataStax Enterprise. Usage - Address resolution - Authentication - Automatic failover - Components - Connection heartbeat - Connection pooling - CQL data types to C# types - Geospatial types support - Graph support - Native protocol - Parametrized queries - Query timestamps - Query warnings - Result paging - Routing queries - Speculative executions - Tuning policies - User-defined functions and aggregates - User-defined types
https://docs.datastax.com/en/developer/csharp-driver-dse/2.1/features/
2020-09-18T21:23:39
CC-MAIN-2020-40
1600400188841.7
[]
docs.datastax.com
GeoServer Docker¶ Last Updated: December 10, 2016 Features¶ The GeoServer docker was updated to version 2.8.3 It can be configured to run in clustered mode (multiple instances of GeoServer running inside the container) for greater stability and performance Several extensions are now included: Installation¶ Installing the GeoServer Docker is done using the Tethys Command Line Interface (see: docker command). To install it, open a terminal, activate the Tethys virtual environment, and execute the command: . /usr/lib/tethys/bin/activate tethys docker init -c geoserver This command will initiate the download of the GeoServer Docker image. Once the image finishes downloading it will be used to create a Docker container and you will be prompted to configure it. Here is a brief explanation of each of the configuration options: GeoServer Instances Enabled: This is the number of parallel running GeoServer instances to start up when the docker starts. All of the GeoServer instances share the same directory and remain synced via the JMS clustering extension (see: JMS Clustering Documentation). Access to the instances is automatically load balanced via NGINX. The load-balanced cluster of GeoServers is accessed using port 8181 and this should be used as the endpoint for your GeoServer docker. You will notice that the identifier of the node appears in the top left corner of the GeoServer admin page. When accessing the admin page of the cluster using port 8181, you will always be redirected to the first node. Any changes to this node will be synced to the other nodes, so usually it will be sufficient to administer the GeoServer this way. However, you can access the admin pages of each node directly using ports 8081-8084, respectively, for troubleshooting. GeoServer Instances with REST Enabled: The number of running GeoServer instances that have the REST API enabled. Tethys Dataset Services uses the REST API to manage data (create, read, update, delete) in GeoServer. It is a good idea to leave a few of your GeoServer nodes as read-only (REST disabled) to retain access to GeoServer data even when it is processing data. To configure it this way, be sure this number is less than the number of enabled GeoServer nodes. Control Flow Options: The control flow extension allows you to limit the number of requests that are allowed to run simultaneously, placing any excess requests into a queue to be executed later. This prevents your GeoServer from becoming overwhelmed during heavy traffic periods. There are two ways to configure control flow during setup: Automatically derive flow control options based on the number of cores of your computer (recommended for development or inexperienced developers) Explicitly set several of the most useful options (useful for a production installation and more experienced developers) Note If you bind the geoserver data directory to the host machine (highly recommended), you can edit these options by editing the controlflow.properties file which is located in the geoserver data directory. Refer to the Control Flow documentation for more details (see: Control Flow Documentation). Max Timeout: The amount of time in seconds to wait before terminating a request. Min and Max Memory: The amount of memory to allocate as heap space for each GeoServer instance. It is usually a good idea to set the min to be the same as the max to avoid the overhead of allocating additional memory if it is needed.
2 GB per instance is probably the maximum you will need for this and the default of 1 GB is likely to be sufficient for many installations. Warning BE CAREFUL WITH THIS. If you set the min memory to be 2 GB per instance with 4 instances enabled, GeoServer will try to allocate 8 GB of memory. If your machine doesn't have 8 GB of memory, it will get overwhelmed and lock down. Bind the GeoServer data directory to the Host (HIGHLY RECOMMENDED): This allows you to mount one of the directories on your machine into the docker container. Long story short, this will give you direct access to the GeoServer data directory outside of the docker container. This is useful if you want to configure your controlflow.properties, add data directly to the data directory, or view the files that were uploaded for debugging. The GeoServer docker container will automatically add the demo data to this directory after starting up the first time. Warning If the directory that you are binding to doesn't exist or you don't have permission to write to it, the setup operation may fail. To be safe you should create the directory beforehand and ensure you can write to it. Migrate to New GeoServer Docker¶ Use these instructions to migrate the data in a GeoServer 2.7.0 Docker to a newer version. The version of GeoServer is displayed on the main admin page of GeoServer. Extract data from the GeoServer docker (the container that Tethys creates for GeoServer is named tethys_geoserver) mkdir ~/backup cd ~/backup docker run --rm --volumes-from tethys_geoserver -v $(pwd):/backup ubuntu:14.04 tar cvf /backup/backup.tar /var/lib/geoserver/data Rename the old GeoServer docker container as a backup and verify that it was renamed docker rename tethys_geoserver tethys_geoserver_bak docker ps -a Pull the new docker container (only in Tethys versions 1.4.0+) . /usr/lib/tethys/bin/activate tethys docker init Respond to the prompts to configure the new GeoServer container, which can be configured to run in a clustered mode (refer to the explanation of the configuration parameters in the installation instructions). After the new GeoServer installs, start it up and visit the admin page to make sure it is working properly. This also adds the data from the GeoServer to the data directory on the host, so DON'T SKIP THIS STEP. When you are done, stop the GeoServer docker. tethys docker start -c geoserver tethys docker stop -c geoserver Browse to the directory where you bound the GeoServer data directory (default is /usr/lib/tethys/geoserver): cd /usr/lib/tethys/geoserver ls -alh data/ You should see the contents of the data directory for the GeoServer docker container. Notice that everything is owned by root. This is because the container runs with the root user. To restore the data from your old container, you will need to delete the contents of this directory and copy over the data in the tar file in ~/backup. sudo rm -rf data/ cp ~/backup/backup.tar . tar xvf backup.tar --strip 3 rm backup.tar Listing the contents of data again, you should see the data restored from your previous GeoServer docker: ls -alh data/ Start up the GeoServer container again. tethys docker start -c geoserver The layer preview and some other features of GeoServer will not work properly until you set the Proxy Base URL, due to the clustered configuration of the GeoServer. Navigate to Settings > Global, locate the Proxy Base URL field, and enter the external URL of your GeoServer.
Note Logging in as admin: sometimes it doesn't work the first time (or the second, third or fourth for that matter). Try, try again until it works. Once you are confident that the data has been successfully migrated from the old GeoServer container to the new one, you should delete the old GeoServer container: docker rm tethys_geoserver_bak
http://docs.tethysplatform.org/en/latest/software_suite/geoserver.html
2020-09-18T20:04:47
CC-MAIN-2020-40
1600400188841.7
[]
docs.tethysplatform.org
End-of-Life (EoL) BFD Overview When you enable BFD, BFD establishes a session from one endpoint (the firewall) to its BFD peer at the endpoint of a link using a three-way handshake. Control packets perform the handshake and negotiate the parameters configured in the BFD profile, including the minimum intervals at which the peers can send and receive control packets. BFD control packets for both IPv4 and IPv6 are transmitted over UDP port 3784. BFD control packets for multihop support are transmitted over UDP port 4784. BFD control packets transmitted over either port are encapsulated in the UDP packets. After the BFD session is established, the Palo Alto Networks implementation of BFD operates in asynchronous mode, meaning both endpoints send each other control packets (which function like Hello packets) at the negotiated interval. If a peer does not receive a control packet within the detection time (calculated as the negotiated transmit interval multiplied by a Detection Time Multiplier), the peer considers the session down. (The firewall does not support demand mode, in which control packets are sent only if necessary rather than periodically.) When you enable BFD for a static route and a BFD session between the firewall and the BFD peer fails, the firewall removes the failed route from the RIB and FIB tables and allows an alternate path with a lower priority to take over. When you enable BFD for a routing protocol, BFD notifies the routing protocol to switch to an alternate path to the peer. Thus, the firewall and BFD peer reconverge on a new path. A BFD profile allows you to Configure BFD settings and apply them to one or more routing protocols or static routes on the firewall. If you enable BFD without configuring a profile, the firewall uses its default BFD profile (with all of the default settings). You cannot change the default BFD profile. When an interface is running multiple protocols that use different BFD profiles, BFD uses the profile having the lowest Desired Minimum Tx Interval. See BFD for Dynamic Routing Protocols. Active/passive HA peers synchronize BFD configurations and sessions; active/active HA peers do not. BFD is standardized in RFC 5880. PAN-OS does not support all components of RFC 5880; see Non-Supported RFC Components of BFD. PAN-OS also supports RFC 5881, Bidirectional Forwarding Detection (BFD) for IPv4 and IPv6 (Single Hop). In this case, BFD tracks a single hop between two systems that use IPv4 or IPv6, so the two systems are directly connected to each other. BFD also tracks multiple hops from peers connected by BGP. PAN-OS follows BFD encapsulation as described in RFC 5883, Bidirectional Forwarding Detection (BFD) for Multihop Paths. However, PAN-OS does not support authentication.
https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/networking/bfd/bfd-overview.html
2020-09-18T21:35:16
CC-MAIN-2020-40
1600400188841.7
[]
docs.paloaltonetworks.com
When you load data, ThoughtSpot uses defaults for data modeling metadata. You can change these defaults using the data modeling file if you have access to the Data > Settings > Business Data Model page. Editing this file allows you to view and edit all the system data columns. To download the file: Click the Data tab in the top navigation bar. Click Settings, then click Business Data Model. Click Download. Edit the file and change the settings. You can make changes to the settings using this procedure. To see a list of the changes you can make, see Data modeling settings. You can edit any of the values in the model file, except for those where the words DoNotModify appear.
https://docs.thoughtspot.com/latest/admin/data-modeling/edit-model-file.html
2020-09-18T19:07:12
CC-MAIN-2020-40
1600400188841.7
[]
docs.thoughtspot.com
Configure SMTP for outbound emails Horde can be configured to use either a local mail agent or a third-party SMTP server for outgoing email. For example, to configure Horde with a Gmail account using IMAP for incoming email and SMTP for outgoing email, update the /opt/bitnami/apps/horde/htdocs/imp/config/backends.php file as shown below: $servers['imap'] = array( 'disabled' => false, 'name' => 'Gmail IMAP server', 'hostspec' => 'imap.gmail.com', 'hordeauth' => false, 'protocol' => 'imap', 'port' => 993, 'secure' => 'ssl', 'maildomain' => '', 'smtp' => array( 'auth' => true, 'localhost' => 'localhost', 'host' => 'smtp.gmail.com', 'password' => 'PASSWORD', 'port' => 587, 'secure' => 'tls', 'username' => 'USERNAME' ), 'cache' => false, ); Replace USERNAME and PASSWORD with your Gmail account username and password respectively. For more information, refer to the Horde documentation.
https://docs.bitnami.com/oracle/apps/horde/configuration/configure-smtp/
2020-07-02T21:30:36
CC-MAIN-2020-29
1593655880243.25
[]
docs.bitnami.com
Northwind Hosting Concept Demo

For the last few months our team has invested a lot of time exploring and researching the relationship between ISVs and Hosters. I shared some of the early thoughts and findings in a series of posts on this blog (for example, Part III - Billing, Metering).

Update 1/26/2010: fixed broken video link.
https://docs.microsoft.com/en-us/archive/blogs/eugeniop/northwind-hosting-concept-demo
2020-07-02T23:13:45
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Aceinna OpenIMU 330ZA

Hardware

Platform Aceinna IMU: Open-source, embedded development platform for Aceinna IMU hardware. Run custom algorithms and navigation code on Aceinna IMU/INS hardware.

Configuration

Please use the OpenRTK ID for the board option in "platformio.ini" (Project Configuration File):

[env:OpenRTK]
platform = aceinna_imu
board = OpenRTK

You can override default Aceinna OpenIMU 330ZA settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest OpenRTK.json. For example, board_build.mcu, board_build.f_cpu, etc.

[env:OpenRTK]
platform = aceinna_imu
board = OpenRTK

; change microcontroller
board_build.mcu = stm32f469IG

; change MCU frequency
board_build.f_cpu = 180000000L

Uploading

Aceinna OpenIMU 330ZA supports the following upload protocols:

blackmagic
jlink
stlink

The default protocol is stlink. You can change the upload protocol using the upload_protocol option:

[env:OpenRTK]
platform = aceinna_imu
board = OpenRTK
upload_protocol = jlink

Debugging

Aceinna OpenIMU 330ZA does not have an on-board debug probe and IS NOT READY for debugging. You will need to use or buy one of the external probes listed below.
https://docs.platformio.org/en/latest/boards/aceinna_imu/OpenRTK.html
2020-07-02T22:12:32
CC-MAIN-2020-29
1593655880243.25
[]
docs.platformio.org
[Legacy] Commands Audit

The Commands Audit module provides Sysdig Secure users with a searchable and sortable audit trail of user commands executed within the infrastructure.

Note: While policy events are an inherently suspicious activity that warrants investigation, commands are not themselves considered suspicious.

The Sysdig Agent examines all execve events. Information about commands that meet the following criteria is saved by the Sysdig backend, and made available for review as a command entry in the Commands Audit module table:

- A program was launched by a shell associated with a terminal (i.e. is related to a user-entered command).
- The parent process was launched in a running container (i.e. the result of a docker exec <container> command).

Warning: If an excessive volume of commands occurs in a given second, some commands may be excluded from the information sent from the agent to the Sysdig backend.

The table below outlines the information displayed in the Commands Audit module:

Review a Command

Individual commands can be reviewed by selecting the line item in the Commands Audit module table. This opens the Command Details window:

The table below outlines the information displayed in the Command Details window:

Filtering the Commands Table

The Commands Audit module's table can be filtered to display only the most relevant commands for a particular issue, or to provide greater visibility of a more targeted scope within the infrastructure. There are three ways to filter the table, which can be used in tandem to refine the information presented.

Groupings

Groupings are hierarchical organizations of labels, allowing users to organize their infrastructure views in a logical hierarchy. Users can switch between pre-configured groupings via the Browse By menu, or configure custom groupings, and then dive deeper into the infrastructure. For more information about groupings, refer to the Configure Groupings in Sysdig Secure documentation.

Time Navigation

Use the time window navigation bar to show only activities run within that window. (For more information, see also Time Windows.)

Note: Sysdig Secure does not currently provide the functionality to configure a custom time window.

Search Filters

Search filters can be applied by either using the search bar directly or by adding pre-configured search strings via the Command Details panel. The search bar example below displays only table items that include apt-get:

To use a pre-configured search string:

- From the Commands Audit module, select a command from the table to open the Command Details window.
- Add a filter by clicking the Add link beside one of the available options.

The example below shows the table filtered by the working directory:

Pre-configured filters exist for the following information: Command, Working Directory, Process ID, Parent Process ID, User ID, Shell ID, Shell Distance.

Note: Search filters can be deleted by either deleting the text in the search bar or clicking the Remove link beside the filter in the Command Details window.
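The two saving criteria above amount to a simple predicate over process events. The sketch below is purely illustrative; the event fields and sample values are hypothetical and are not the agent's actual data model:

def is_auditable_command(event):
    """Keep a command if (1) it was launched by a shell attached to a
    terminal (a user-entered command), or (2) its parent process was
    launched inside a running container (e.g. via docker exec)."""
    launched_by_interactive_shell = bool(
        event.get("parent_is_shell") and event.get("has_tty")
    )
    parent_launched_in_container = event.get("parent_container_id") is not None
    return launched_by_interactive_shell or parent_launched_in_container

# Hypothetical execve events:
print(is_auditable_command({"parent_is_shell": True, "has_tty": True}))    # True
print(is_auditable_command({"parent_container_id": "abc123"}))             # True
print(is_auditable_command({"parent_is_shell": True, "has_tty": False}))   # False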
https://docs.sysdig.com/en/-legacy--commands-audit.html
2020-07-02T22:33:02
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
Usage

# Installation with CocoaPods

CocoaPods is a dependency manager for Objective-C, which automates and simplifies the process of using 3rd-party libraries in your projects. You can install it with the following command:

$ gem install cocoapods

# Podfile

To integrate CDNByeSDK into your Xcode project using CocoaPods, specify it in your Podfile:

source ''
platform :ios, '10.0'
target 'TargetName' do
  # Uncomment the next line if you're using Swift
  # use_frameworks!
  pod 'CDNByeSDK'
end

Then, run the following command:

$ pod install

If CocoaPods cannot find CDNByeSDK in the repo, run:

$ pod repo update

Update the SDK if needed:

pod update CDNByeSDK --verbose --no-repo-update

# Integration

In order to allow the loading of distributed content via the local proxy, enable loading data from HTTP in your app by opening your info.plist file as source code and adding the following values inside the root <dict> element:

<key>NSAppTransportSecurity</key>
<dict>
  <key>NSAllowsArbitraryLoads</key>
  <true/>
</dict>

# Include

Import CDNByeSDK in AppDelegate.m:

#import <CDNByeKit/CBP2pEngine.h>

If you want to use CDNByeSDK in your Swift app, then you need to create a bridging header that allows your Swift code to work with it.

# Initialize CBP2pEngine

Initialize CBP2pEngine in AppDelegate.m:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [[CBP2pEngine sharedInstance] startWithToken:YOUR_TOKEN andP2pConfig:nil];
    return YES;
}

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    CBP2pEngine.sharedInstance().start(token: YOUR_TOKEN, p2pConfig: nil)
    return true
}

Where YOUR_TOKEN is your Customer ID. Please replace it with your own token obtained from the console; click here for more information.

# Usage

When initializing an AVPlayer (or any other video player) instance, before passing it a URL, pass that URL through the CDNBye P2P Engine:

NSURL *originalUrl = [NSURL URLWithString:@""];
NSURL *parsedUrl = [[CBP2pEngine sharedInstance] parseStreamURL:originalUrl];
_player = [[AVPlayer alloc] initWithURL:parsedUrl];

let originalUrl = URL.init(string: "")
let parsedUrl = CBP2pEngine.sharedInstance().parse(streamURL: originalUrl!)
_player = AVPlayer.init(url: parsedUrl)

That's it! CDNBye should now be integrated into your app.

# Demo

A complete example can be found here
http://docs.cdnbye.com/en/views/ios/usage.html
2020-07-02T21:31:45
CC-MAIN-2020-29
1593655880243.25
[]
docs.cdnbye.com
What to do before selling or giving away your iPhone, iPad, or iPod touch If the iPad was running iOS 7, iCloud: Find My iPhone Activation Lock in iOS 7 iCloud: Activation Lock Find My iPhone Activation Lock: Removing a device from a previous owner's account Buying or Selling a Used iPhone or iPad Running iOS 7? Read This First!
http://docs.gz.ro/node/213
2020-07-02T23:02:09
CC-MAIN-2020-29
1593655880243.25
[array(['http://docs.gz.ro/sites/default/files/styles/thumbnail/public/pictures/picture-1-1324065756.jpg?itok=rS4jtWxd', "root's picture root's picture"], dtype=object) ]
docs.gz.ro
Your access to and use of the documentation located on this site is subject to the following terms and conditions and all applicable laws. By accessing and using this documentation, you accept the following terms and conditions, without limitation or qualification.

Unless otherwise stated, the contents of this site including, but not limited to, the text and images contained herein and their arrangement are the property of Sysdig, Inc. All trademarks used or referred to in this website are the property of their respective owners.

Nothing contained in this site shall be construed as conferring, by implication or otherwise, any license or right to any copyright, patent, trademark or other proprietary interest of Sysdig or any third party. This site and the content provided in this site, including, but not limited to, graphic images, audio, video, html code, buttons, and text, may not be copied, reproduced, republished, uploaded, posted, transmitted, or distributed in any way, without the prior written consent of Sysdig.

Links on this site may lead to services or sites not operated by Sysdig. No judgment or warranty is made with respect to such other services or sites, and Sysdig makes no representation or warranty of any kind with respect to the documentation or any site or service accessible through this site. Sysdig expressly disclaims all express and implied warranties including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, title, and non-infringement. In no event will Sysdig be liable for damages arising out of the use of this documentation, any content on or accessed through this documentation, any site or service linked to, or any copying, displaying, or use thereof.

Sysdig maintains this site in California, U.S.A.

Sysdig does not accept unauthorized idea submissions outside of established business relationships. To protect the interests of our current clients and ourselves, we must treat the issue of such submissions with great care. Importantly, without a clear business relationship, Sysdig cannot and does not treat any such submissions in confidence. Accordingly, please do not communicate unauthorized idea submissions to Sysdig through this website. Any ideas disclosed to Sysdig outside a pre-existing and documented confidential business relationship are not confidential and Sysdig may therefore develop, use and freely disclose or publish similar ideas without compensating you or accounting to you. Sysdig will make every reasonable effort to return or destroy any unauthorized idea submissions without detailed review of them. However, if a review is necessary in Sysdig's sole discretion, it will be with the understanding that Sysdig assumes no obligation to protect the confidentiality of your idea or compensate you for its disclosure or use. By submitting an idea or other detailed submission to Sysdig through this website, you agree to be bound by the terms of this stated policy.
https://docs.sysdig.com/en/terms-of-use.html
2020-07-02T22:12:29
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
This topic provides information about the various ports used by BMC ProactiveNet. You can configure some of these ports during the installation of BMC ProactiveNet Server, and some of these ports can be configured only after the installation of BMC ProactiveNet. For more information about ports that can be configured during installation, see Ports that can be configured during installation. For more information about ports that can be configured after installation, see Ports that can be configured after installation. For more information about updating the port numbers, see Updating port numbers.

Certain BMC ProactiveNet Server ports are used for communication between processes. Some ports are used by processes running on the server only; these are internal and must not be accessed by other computers in the network (Event server). For security reasons, BMC Software recommends that all internal ports be made accessible only via the loopback address (127.0.0.1). By default, ports that are not required by external computers are secured, that is, the properties associated with the ports are set to the loopback address. To make the BMC ProactiveNet Server accessible to other computers in a network, certain ports on the server must be made available. From a multi-homed computer, BMC ProactiveNet Server processes can be accessed using any of the available IP addresses.

The ports that the BMC ProactiveNet Server uses are bidirectional by default. That is, the TCP/IP ports transmit and receive data and can be used to perform read and write operations.

Note: Copy the values from installationDirectory/pw/pronto/conf/pronet.conf to installationDirectory/pw/custom/conf/pronet.conf and make changes in the custom/pronet.conf file to retain the changes for upgrades.
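The override mechanism described in the note lends itself to a small helper script. The sketch below is only an illustration: the installation path and the property name are hypothetical, and a real pronet.conf may contain formatting this simple parser does not handle.

from pathlib import Path

install_dir = Path("/opt/bmc/ProactiveNet")          # hypothetical install root
default_conf = install_dir / "pw/pronto/conf/pronet.conf"
custom_conf = install_dir / "pw/custom/conf/pronet.conf"

overrides = {"pronet.example.port": "12124"}         # hypothetical property

# Read the shipped defaults so we only copy properties that really exist.
defaults = {}
for line in default_conf.read_text().splitlines():
    if "=" in line and not line.lstrip().startswith("#"):
        key, value = line.split("=", 1)
        defaults[key.strip()] = value.strip()

custom_conf.parent.mkdir(parents=True, exist_ok=True)
with custom_conf.open("a") as out:
    for key, value in overrides.items():
        if key in defaults:
            # Changed values live in custom/conf so they survive upgrades.
            out.write(f"{key}={value}\n")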
https://docs.bmc.com/docs/display/public/proactivenet95/BMC+ProactiveNet+ports
2020-07-02T23:04:26
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Setup Cloud Foundry on Spinnaker

Overview

- Cloud Foundry makes it faster and easier to build, test, deploy and scale applications, providing a choice of clouds, developer frameworks, and application services. It is an open source project and is available through a variety of private cloud distributions and public cloud instances.
- Cloud Foundry is used in companies where micro services are going cloud-native, to innovate and deliver a product with quality and elegance.

Prerequisites

- Your CF foundations' API endpoints must be reachable from your installation of Spinnaker.

Add an Account

- While the Cloud Foundry provider is in alpha, the hal CLI does not have support for adding a CF account (this support will be added soon). Instead, you can use Halyard's custom configuration to add a CF account to an existing installation of Spinnaker.
- On the machine running Halyard, Halyard creates a .hal directory.
- It contains a subdirectory for your Spinnaker deployment; by default, this subdirectory is called default.
- The deployment subdirectory itself contains a profiles subdirectory.
- Change to this subdirectory (an example path might be something like ~/.hal/default/profiles/) and within it, create the two files shown below.
- Create a file called settings-local.js, with the following contents:

window.spinnakerSettings.providers.cloudfoundry = {
  defaults: {account:'my-cloudfoundry-account'}
};

- This file tells Spinnaker's Deck microservice to load functionality supporting CF.
- Create another file called clouddriver-local.yml, modifying the contents to include the relevant CF credentials:

cloudfoundry:
  enabled: true
  accounts:
    - name: account-name
      user: 'account-user'
      password: 'account-password'
      api: api.foundation.path
    - name: optional-second-account
      api: api.optional.second.foundation.path
      user: 'second-account-user'
      password: 'second-account-password'

- This file gives Spinnaker account information with which to reach your CF instance.
- If you are setting up a new installation of Spinnaker, proceed to "Next steps" below.
- If you are working with an existing installation of Spinnaker, apply your changes:

$ hal deploy apply

- Within a few minutes after applying your changes, you should be able to view the CF instance's existing applications from your installation of Spinnaker.

Next steps

- Optionally, you can set up another cloud provider, but otherwise you're ready to choose an environment in which to install Spinnaker.
https://docs.opsmx.com/setup-spinnaker/cloud-providers/cloud-foundry/
2020-07-02T22:52:08
CC-MAIN-2020-29
1593655880243.25
[]
docs.opsmx.com
Tools to improve conversion

These are some ready-to-use items, including dynamic widgets, custom pages and images of all sizes, that will help your consumers be aware that financing is available and help you convert more. Note that you will need to log in to your account, as some elements are present inside your account only, and some others will require your token ID to be customized for your site specifically.

Monthly Payment Estimator

Inside your portal account, under "Integration" you will find a ready-to-use, copy & paste tool that will add the monthly price of each item on the page, based on the customer's FICO score and your current active lenders. This tool is designed to fit well in your website and to allow your customers to buy more by showing their maximum purchase power according to their FICO score, and to know how much they will have to pay every month. This will increase your conversions.

Example using Monthly Payment Estimator

This is how it looks in the normal state on a participant merchant: And this is if the consumer clicks on the monthly price: This tool tries to be minimalistic (it will not display if a price cannot be financed), easily configurable and functional. We recommend adding this tool to all the pages with prices (category listings, the product page and at the checkout) so users can get an idea of their budget for monthly prices in each step.

Sample code for the monthly payment estimator

Installing the monthly payment estimator is very easy, and it might be as simple as changing the CSS selector of where to find the price on your page. It can be customized to change the text, the color and different aspects, but in most cases customizing 1 line will be enough. Check inside your account to find your code.

Custom Program Page

Log in to your portal account and you will be able to access your custom landing page, under the Integration section. This page explains the current programs you have enabled to your consumers. It is completely customized for your usage and helps consumers know, before they apply, if they will be able to GetFinancing for their purchase. This page has several features:

- It is specific to your merchant, with the name of your site in several places on the page
- Contains tools to quickly estimate the consumer's maximum approval, depending on your active programs and their FICO
- Allows them to simulate their best monthly payment, without actually going inside the process.
- Contains basic information on how to apply for Financing.
- And contains the Frequently Asked Questions consumers may have.

Images for the checkout

These images are intended to be used before we show the user the GetFinancing process. Usually they are shown together with other immediate payment methods such as credit card. They can also be used as a button. Example of using an image on the checkout of a participant merchant:

SVG images (for any size)

You can use our SVG images like any other image, and unlike other formats like PNG, these will look good at literally any size. All modern browsers support this format, and they will keep the original colors. To make them bigger or smaller, just change their height. Just like with the rest of the images, we suggest that you link these to your custom program page, which will help consumers understand how they have to use GetFinancing to pay less every month.

Available SVG images

- Logo with a link to the landing page:
- Logo without any messaging. Useful for the checkout or for places with very little space.
- Logo for the checkout, with messaging:

Sample code for svg images

<!-- Link these to the landing page describing our partnership -->
<a target="_blank" href="">
  <img height="80px" src="">
</a>

Webfonts (for any color & size)

Webfonts are icons that, like SVG images, will look great at any size. The difference with SVG images is that they will adapt more easily to your website, because they will inherit your current color scheme.

Example using webfonts

This is an example of how a webfont is used on a participant merchant. Next to the other payment methods, on a different line, slightly bigger and with a link, but fitting nicely with the existing layout.

List of available webfonts

These are the different fonts you can use, in this case with our default color and a font-size of 40px (this is the height of the font, you just need to make sure the text is readable).

Sample code to use webfonts

Finally, this is sample code where the button links to a landing page, we import the CSS for the webfont, and add an icon with a height of 40px.

<!-- Link these to the landing page describing our partnership -->
<a target="_blank" href="">
  <!-- Add our webfonts as a stylesheet-->
  <link rel="stylesheet" href="">
  <!-- Add a span, with the class being the webfont you choose and the style you want (font-size, color, shadow) or remove it completely to inherit your current font style -->
  <span class="icon-monthly-payment-big" style="font-size:40px"> </span>
</a>
https://docs.getfinancing.com/integration-items.html
2020-02-16T21:55:57
CC-MAIN-2020-10
1581875141430.58
[array(['_images/mpe1.png', '_images/mpe1.png'], dtype=object) array(['_images/mpe2.png', '_images/mpe2.png'], dtype=object) array(['_images/webfont.png', '_images/webfont.png'], dtype=object)]
docs.getfinancing.com
All content with label 2lcache+amazon+aws+docs+gridfs+gui_demo+import+index+infinispan+installation+interceptor+jcache+jsr-107+locking+repeatable_read+schema+webdav. Related Labels: expiration, publish, datagrid, coherence, server, replication, transactionmanager, dist, release, query, lock_striping, jbossas, nexus, guide, listener, cache, s3, grid, api, xsd, ehcache, maven, documentation, wcm, userguide, write_behind, ec2, 缓存, s, hibernate, getting, interface, custom_interceptor, clustering, setup, eviction, concurrency, out_of_memory, examples, jboss_cache, events, batch, configuration, hash_function, buddy_replication, loader, write_through, cloud, mvcc, notification, tutorial, read_committed, jbosscache3x, xml, distribution, meeting, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, permission, transaction, async, interactive, xaresource, build, gatein, searchable, demo, scala, ispn, client, non-blocking, migration, filesystem, jpa, tx, user_guide, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, snapshot, consistent_hash, batching, store, jta, faq, as5, docbook, jgroups, lucene, rest, hot_rod more » ( - 2lcache, - amazon, - aws, - docs, - gridfs, - gui_demo, - import, - index, - infinispan, - installation, - interceptor, - jcache, - jsr-107, - locking, - repeatable_read, - schema, - webdav )
https://docs.jboss.org/author/label/2lcache+amazon+aws+docs+gridfs+gui_demo+import+index+infinispan+installation+interceptor+jcache+jsr-107+locking+repeatable_read+schema+webdav
2020-02-16T22:39:46
CC-MAIN-2020-10
1581875141430.58
[]
docs.jboss.org
Run a test failover (disaster recovery drill) to Azure This article describes how to run a disaster recovery drill to Azure, using a Site Recovery test failover. You run a test failover to validate your replication and disaster recovery strategy, without any data loss or downtime. A test failover doesn't impact ongoing replication, or your production environment. You can run a test failover on a specific virtual machine (VM), or on a recovery plan containing multiple VMs. Run a test failover This procedure describes how to run a test failover for a recovery plan. If you want to run a test failover for a single VM, follow the steps described here In Site Recovery in the Azure portal, click Recovery Plans > recoveryplan_name > Test Failover. Select a Recovery Point to which to fail over. You can use one of the following options: - Latest processed: This option fails over all VMs in the plan to the latest recovery point processed by Site Recovery. To see the latest recovery point for a specific VM, check Latest Recovery Points in the VM settings. This option provides a low RTO (Recovery Time Objective), because no time is spent processing unprocessed data. - Latest app-consistent: This option fails over all the VMs in the plan to the latest application-consistent recovery point processed by Site Recovery. To see the latest recovery point for a specific VM, check Latest Recovery Points in the VM settings. - Latest: This option first processes all the data that has been sent to Site Recovery service, to create a recovery point for each VM before failing over to it. This option provides the lowest RPO (Recovery Point Objective), because the VM created after failover will have all the data replicated to Site Recovery when the failover was triggered. - Latest multi-VM processed: This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs with the setting enabled fail over to the latest common multi-VM consistent recovery point. Other VMs fail over to the latest processed recovery point. - Latest multi-VM app-consistent: This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs that are part of a replication group fail over to the latest common multi-VM application-consistent recovery point. Other VMs fail over to their latest application-consistent recovery point. - Custom: Use this option to fail over a specific VM to a particular recovery point. Select an Azure virtual network in which test VMs will be created. - Site Recovery attempts to create test VMs in a subnet with the same name and same IP address as that provided in the Compute and Network settings of the VM. - If a subnet with the same name isn't available in the Azure virtual network used for test failover, then the test VM is created in the first subnet alphabetically. - If same IP address isn't available in the subnet, then the VM receives another available IP address in the subnet. Learn more. If you're failing over to Azure and data encryption is enabled, in Encryption Key, select the certificate that was issued when you enabled encryption during Provider installation. You can ignore this step if encryption isn't enabled. Track failover progress on the Jobs tab. You should be able to see the test replica machine in the Azure portal. To initiate an RDP connection to the Azure VM, you need to add a public IP address on the network interface of the failed over VM. When everything is working as expected, click Cleanup test failover. 
This deletes the VMs that were created during test failover. In Notes, record and save any observations associated with the test failover.

When a test failover is triggered, the following occurs:

- Prerequisites: A prerequisites check runs to make sure that all conditions required for failover are met.
- Failover: The failover step processes and prepares the data, so that an Azure VM can be created from it.
- Latest: If you have chosen the latest recovery point, a recovery point is created from the data that has been sent to the service.
- Start: This step creates an Azure virtual machine using the data processed in the previous step.

Failover timing

In the following scenarios, failover requires an extra intermediate step that usually takes around 8 to 10 minutes to complete:

- VMware VMs running a version of the Mobility service older than 9.8
- Physical servers
- VMware Linux VMs
- Hyper-V VMs protected as physical servers
- VMware VMs where the following drivers aren't boot drivers: storvsc, vmbus, storflt, intelide, atapi
- VMware VMs that don't have DHCP enabled, irrespective of whether they are using DHCP or static IP addresses.

In all other cases, no intermediate step is required, and failover takes significantly less time.

Create a network for test failover

Update the DNS of the test network with the IP address specified for the DNS VM in Compute and Network settings. Read test failover considerations for Active Directory for more details.

Test failover to a production network in the recovery site

Although we recommend that you use a test network separate from your production network, if you do want to run a disaster recovery drill in your production network, note the following:

- Make sure that the primary VM is shut down when you run the test failover. Otherwise there will be two VMs with the same identity, running in the same network at the same time. This can lead to unexpected consequences.
- Any changes to VMs created for test failover are lost when you clean up the failover. These changes are not replicated back to the primary VM.
- Testing in your production environment leads to downtime of your production application. Users shouldn't use apps running on VMs when the test failover is in progress.

Prepare Active Directory and DNS

To run a test failover for application testing, you need a copy of your production Active Directory environment in your test environment. Read test failover considerations for Active Directory to learn more.

Prepare to connect to Azure VMs after failover

If you want to connect to Azure VMs using RDP/SSH after failover, follow the requirements summarized in the table. Follow the steps described here to troubleshoot any connectivity issues post failover.

Next steps

After you've completed a disaster recovery drill, learn more about other types of failover.
https://docs.microsoft.com/en-in/azure/site-recovery/site-recovery-test-failover-to-azure
2020-02-16T22:52:12
CC-MAIN-2020-10
1581875141430.58
[array(['media/site-recovery-test-failover-to-azure/testfailover.png', 'Test Failover'], dtype=object) array(['media/site-recovery-test-failover-to-azure/testfailoverjob.png', 'Test Failover'], dtype=object) ]
docs.microsoft.com
Implement background tasks in microservices with IHostedService and the BackgroundService class. From a generic point of view, in .NET Core we called these type of tasks Hosted Services, because they are services/logic that you host within your host/application/microservice. Note that in this case, the hosted service simply means a class with the background task logic. Since .NET Core 2.0, the framework provides a new interface named IHostedService helping you to easily implement hosted services. The basic idea is that you can register multiple background tasks (hosted services) that run in the background while your web host or host is running, as shown in the image 6-26. Figure 6-26. Using IHostedService in a WebHost vs. a Host ASP.NET Core 1.x and 2.x support IWebHost for background processes in web apps. .NET Core 2.1 supports IHost for background processes with plain console apps. Note the difference made between WebHost and Host. A WebHost (base class implementing IWebHost) in ASP.NET Core 2.0 is the infrastructure artifact you use to provide HTTP server features to your process, such as if you are implementing an MVC web app or Web API service. It provides all the new infrastructure goodness in ASP.NET Core, enabling you to use dependency injection, insert middlewares in the request pipeline, etc. and precisely use these IHostedServices for background tasks. A Host (base class implementing IHost) was introduced in .NET Core 2.1. Basically, a Host allows you to have a similar infrastructure than what you have with WebHost (dependency injection, hosted services, etc.), but in this case, you just want to have a simple and lighter process as the host, with nothing related to MVC, Web API or HTTP server features. Therefore, you can choose and either create a specialized host-process with IHost to handle the hosted services and nothing else, such a microservice made just for hosting the IHostedServices, or you can alternatively extend an existing ASP.NET Core WebHost, such as an existing ASP.NET Core Web API or MVC app. Each approach has pros and cons depending on your business and scalability needs. The bottom line is basically that if your background tasks have nothing to do with HTTP (IWebHost) you should use IHost. Registering hosted services in your WebHost or Host Let’s drill down further on the IHostedService interface since its usage is pretty similar in a WebHost or in a Host. SignalR is one example of an artifact using hosted services, but you can also use it for much simpler things like: - A background task polling a database looking for changes. - A scheduled task updating some cache periodically. - An implementation of QueueBackgroundWorkItem that allows a task to be executed on a background thread. - Processing messages from a message queue in the background of a web app while sharing common services such as ILogger. - A background task started with Task.Run(). You can basically offload any of those actions to a background task based on IHostedService. The way you add one or multiple IHostedServices into your WebHost or Host is by registering them up through the AddHostedService extension method in an ASP.NET Core WebHost (or in a Host in .NET Core 2.1 and above). Basically, you have to register the hosted services within the familiar ConfigureServices() method of the Startup class, as in the following code from a typical ASP.NET WebHost. 
public IServiceProvider ConfigureServices(IServiceCollection services) { //Other DI registrations; // Register Hosted Services services.AddHostedService<GracePeriodManagerService>(); services.AddHostedService<MyHostedServiceB>(); services.AddHostedService<MyHostedServiceC>(); //... } In that code, the GracePeriodManagerService hosted service is real code from the Ordering business microservice in eShopOnContainers, while the other two are just two additional samples. The IHostedService background task execution is coordinated with the lifetime of the application (host or microservice, for that matter). You register tasks when the application starts and you have the opportunity to do some graceful action or clean-up when the application is shutting down. Without using IHostedService, you could always start a background thread to run any task. The difference is precisely at the app’s shutdown time when that thread would simply be killed without having the opportunity to run graceful clean-up actions. The IHostedService interface When you register an IHostedService, .NET Core will call the StartAsync() and StopAsync() methods of your IHostedService type during application start and stop respectively. Specifically, start is called after the server has started and IApplicationLifetime.ApplicationStarted is triggered. The IHostedService as defined in .NET Core, looks like the following. namespace Microsoft.Extensions.Hosting { // // Summary: // Defines methods for objects that are managed by the host. public interface IHostedService { // // Summary: // Triggered when the application host is ready to start the service. Task StartAsync(CancellationToken cancellationToken); // // Summary: // Triggered when the application host is performing a graceful shutdown. Task StopAsync(CancellationToken cancellationToken); } } As you can imagine, you can create multiple implementations of IHostedService and register them at the ConfigureService() method into the DI container, as shown previously. All those hosted services will be started and stopped along with the application/microservice. As a developer, you are responsible for handling the stopping action of your services when StopAsync() method is triggered by the host. Implementing IHostedService with a custom hosted service class deriving from the BackgroundService base class You could go ahead and create your custom hosted service class from scratch and implement the IHostedService, as you need to do when using .NET Core 2.0. However, since most background tasks will have similar needs in regard to the cancellation tokens management and other typical operations, there is a convenient abstract base class you can derive from, named BackgroundService (available since .NET Core 2.1). That class provides the main work needed to set up the background task. The next code is the abstract BackgroundService base class as implemented in .NET Core. // Copyright (c) .NET Foundation. Licensed under the Apache License, Version 2.0. /// <summary> /// Base class for implementing a long running <see cref="IHostedService"/>. 
/// </summary>
public abstract class BackgroundService : IHostedService, IDisposable
{
    private Task _executingTask;
    private readonly CancellationTokenSource _stoppingCts =
        new CancellationTokenSource();

    protected abstract Task ExecuteAsync(CancellationToken stoppingToken);

    public virtual Task StartAsync(CancellationToken cancellationToken)
    {
        // Store the task we're executing
        _executingTask = ExecuteAsync(_stoppingCts.Token);

        // If the task is completed then return it,
        // this will bubble cancellation and failure to the caller
        if (_executingTask.IsCompleted)
        {
            return _executingTask;
        }

        // Otherwise it's running
        return Task.CompletedTask;
    }

    public virtual async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop called without start
        if (_executingTask == null)
        {
            return;
        }

        try
        {
            // Signal cancellation to the executing method
            _stoppingCts.Cancel();
        }
        finally
        {
            // Wait until the task completes or the stop token triggers
            await Task.WhenAny(_executingTask,
                               Task.Delay(Timeout.Infinite, cancellationToken));
        }
    }

    public virtual void Dispose()
    {
        _stoppingCts.Cancel();
    }
}

When deriving from the previous abstract base class, thanks to that inherited implementation, you just need to implement the ExecuteAsync() method in your own custom hosted service class, as in the following simplified code from eShopOnContainers, which polls a database and publishes integration events into the Event Bus when needed.

public class GracePeriodManagerService : BackgroundService
{
    private readonly ILogger<GracePeriodManagerService> _logger;
    private readonly OrderingBackgroundSettings _settings;
    private readonly IEventBus _eventBus;

    public GracePeriodManagerService(IOptions<OrderingBackgroundSettings> settings,
                                     IEventBus eventBus,
                                     ILogger<GracePeriodManagerService> logger)
    {
        //Constructor's parameters validations...
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogDebug($"GracePeriodManagerService is starting.");

        stoppingToken.Register(() =>
            _logger.LogDebug($" GracePeriod background task is stopping."));

        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogDebug($"GracePeriod task doing background work.");

            // This eShopOnContainers method is querying a database table
            // and publishing events into the Event Bus (RabbitMQ / ServiceBus)
            CheckConfirmedGracePeriodOrders();

            await Task.Delay(_settings.CheckUpdateTime, stoppingToken);
        }

        _logger.LogDebug($"GracePeriod background task is stopping.");
    }

    .../...
}

In this specific case for eShopOnContainers, it's executing an application method that's querying a database table looking for orders with a specific state, and when applying changes, it is publishing integration events through the event bus (underneath it can be using RabbitMQ or Azure Service Bus). Of course, you could run any other business background task instead.

By default, the cancellation token is set with a 5 second timeout, although you can change that value when building your WebHost using the UseShutdownTimeout extension of the IWebHostBuilder. This means that our service is expected to cancel within 5 seconds, otherwise it will be more abruptly killed. The following code changes that time to 10 seconds.

WebHost.CreateDefaultBuilder(args)
    .UseShutdownTimeout(TimeSpan.FromSeconds(10))
    ...

Summary class diagram

The following image shows a visual summary of the classes and interfaces involved when implementing IHostedServices.

Figure 6-27. Class diagram showing the multiple classes and interfaces related to IHostedService

Class diagram: IWebHost and IHost can host many services, which inherit from BackgroundService, which implements IHostedService.

Deployment considerations and takeaways

It is important to note that the way you deploy your ASP.NET Core WebHost or .NET Core Host might impact the final solution. For instance, if you deploy your WebHost on IIS or a regular Azure App Service, your host can be shut down because of app pool recycles. But if you are deploying your host as a container into an orchestrator like Kubernetes or Service Fabric, you can control the assured number of live instances of your host. In addition, you could consider other approaches in the cloud especially made for these scenarios, like Azure Functions.
Finally, if you need the service to be running all the time and are deploying on a Windows Server, you could use a Windows Service. But even for a WebHost deployed into an app pool, there are scenarios like repopulating or flushing the application's in-memory cache that would still be applicable.

The IHostedService interface provides a convenient way to start background tasks in an ASP.NET Core web application (in .NET Core 2.0) or in any process/host (starting in .NET Core 2.1 with IHost). Its main benefit is the opportunity you get, with the graceful cancellation, to clean up the code of your background tasks when the host itself is shutting down.

Additional resources

Building a scheduled task in ASP.NET Core/Standard 2.0
Implementing IHostedService in ASP.NET Core 2.0
GenericHost Sample using ASP.NET Core 2.1
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice
2020-02-16T22:47:00
CC-MAIN-2020-10
1581875141430.58
[array(['media/background-tasks-with-ihostedservice/ihosted-service-webhost-vs-host.png', 'Diagram comparing ASP.NET Core IWebHost and .NET Core IHost.'], dtype=object) array(['media/background-tasks-with-ihostedservice/class-diagram-custom-ihostedservice.png', 'Diagram showing that IWebHost and IHost can host many services.'], dtype=object) ]
docs.microsoft.com
6.8 Tab-delimited Text InqScribe can import and export tab-delimited text files. This is a convenient format for bringing transcript data into Excel or some other spreadsheet-like application. You can find general guidance for importing and exporting data elsewhere. 6.8.1 Importing Tab-delimited Text You can import a series of records in tab-delimited format. Tab-delimited format means that each line of the file corresponds to a single record, and fields within a record are separated by tab characters (e.g. ASCII 9). Note: Records can be separated by any common end of line character. You can use CR (Macintosh default), CR/LF (Windows), or LF (Unix); InqScribe will handle them all just fine. A record is a combination of a timecode and related text that is associated with that timecode. This is useful if you maintain a database of such records in another application. InqScribe expects the tab-delimited data to have from two to four fields: - Start timecode. If this field is blank, InqScribe will ignore it. - End timecode. Optional. If this field is blank, InqScribe will ignore it. - Speaker name. Optional, but if present, end timecode must be present too. If this field is blank, InqScribe will ignore it. - Transcript. This field cannot contain end of line characters, since those are used to separate records. If your transcript contains multiple lines, use a vertical tab character (ASCII 11) to separate the lines. (When InqScribe exports data records, it uses vertical tabs in this way, as does FileMaker Pro.) Given the various optional fields, this means your tab-delimited file should match one of the following field orderings. - Start time, related text - Start time, end time, transcript - Start time, end time, speaker name, transcript Here's a brief example. If your import file looks like this (note that "\t" refers to a single tab character, e.g. ASCII 9): 00:00:00\tThe first line. 00:05:00\tThe second line. Then your transcript will look like: [00:00:00.00] The first line. [00:05:00.00] The second line. 6.8.2 Exporting Tab-delimited Text Exporting tab-delimited text creates a series of "records" as described on the exporting overview page. Each record consists of the following fields: - Start time/IN - End time/OUT (optional) - Duration (optional) - Speaker name (optional) - Transcript Fields are separated by tab characters; records are separated by end of line characters appropriate to your OS. If there are return characters within a field, they are converted to vertical tab characters (ASCII 11). Tabs within a field are converted to spaces. Please note that InqScribe will export a header row at the beginning of the file with the names of each field.
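Because the format is this simple, import files are easy to generate programmatically. The sketch below is an unofficial illustration of the rules described above (two to four tab-separated fields per record, vertical tab for line breaks inside a transcript); the timecodes and text are made up:

# Build a small InqScribe-style tab-delimited import file.
# "\x0b" is the vertical tab (ASCII 11) used for line breaks inside a field.
records = [
    # (start, end, speaker, transcript); end and speaker may be None,
    # but a speaker is only valid when an end timecode is also present.
    ("00:00:00.00", "00:00:05.00", "Interviewer", "Welcome to the session."),
    ("00:00:05.00", None, None, "First line of the answer.\x0bSecond line."),
]

lines = []
for start, end, speaker, transcript in records:
    fields = [start]
    if end is not None:
        fields.append(end)
        if speaker is not None:
            fields.append(speaker)
    fields.append(transcript.replace("\t", " "))  # tabs would split the field
    lines.append("\t".join(fields))

with open("transcript_import.txt", "w", encoding="utf-8", newline="") as f:
    f.write("\n".join(lines) + "\n")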
http://docs.inqscribe.com/2.2/format_tab.html
2020-02-16T21:29:56
CC-MAIN-2020-10
1581875141430.58
[]
docs.inqscribe.com
Crate rustc_ap_rustc_session

Some facilities for tracking how codegen-units are reused during incremental compilation. This is used for incremental compilation tests and debug output.

Contains infrastructure for configuring the compiler, including parsing command-line options.

Contains ParseSess which holds state living beyond what one Parser might. It also serves as an input to the parser itself.

Declares a static item of type &'static Lint.

Declares a type named $name which implements LintPass. To the right of => a comma-separated list of Lint statics is given.

Implements LintPass for $name with the given list of Lint statics.

Declares a static LintArray and returns it as an expression.

Hash value constructed out of all the -C metadata arguments passed to the compiler. Together with the crate-name forms a unique global identifier for the crate.

Represents the data associated with a compilation session for a single crate.

Diagnostic message ID, used by Session.one_time_diagnostics to avoid emitting the same message more than once.

Holds data on the current incremental compilation session, if there is one.
https://docs.rs/rustc-ap-rustc_session/638.0.0/rustc_ap_rustc_session/
2020-02-16T22:22:21
CC-MAIN-2020-10
1581875141430.58
[]
docs.rs
Permission Sets

The Clever EDI app install will create two permission sets: one is Clever EDI and the other is for the dependency app, Inbound Documents. It is recommended that both permission sets are assigned to those using the Clever EDI areas so they can accept/reject and make changes to EDI documents. The Clever Config permission set should be given to all users.
http://docs.cleverdynamics.com/Clever%20EDI/User%20Guide/Permission%20Sets%20EDI/
2020-02-16T21:16:52
CC-MAIN-2020-10
1581875141430.58
[array(['../media/e5feb4ad4950e75c6737aa22d769047f.png', None], dtype=object) ]
docs.cleverdynamics.com
Getting Started - Configuring an Android (Google Play) app - Package name - License Key - Service account credentials - Real-time developer notifications Package name The Package name uniquely identifies your app in the Play Store. To find your Package name, first navigate and login to the Google Play Developer Publisher Console, then navigate to All applications. You will find your Package name in the list of applications beneath the app’s name. License key The license key is used to verify the purchase signature when a new purchase is made. To find your License key, first navigate and login to the Google Play Developer Publisher Console, then navigate to All applications and select your app. Click Development tools on the left, then click Services & APIs. You will find your License key in the Licensing & in-app billing section. Service account credentials The service account credentials is a JSON file containing a private key used for authenticating with the Google Play Developer API. To create a service account, first navigate and login to the Google Play Developer Publisher Console - API access page. You will need a linked Google Play Android Developer project, if you don’t already have a linked project, click the Create new project button. Next step is to create a service account, click the Create service account button and read the instructions carefully. It will guide you to navigate to the Google API Console to create your service account there. Don’t click Done until you have created your service account in the Google API Console. Once in the Google API Console click on Create service account. Give the service account a descriptive name, we recommend naming it Mbaasy. Select Owner from Project role > Project > Owner. Select Furnish a new private key and ensure JSON is selected, then click Save. A .json file will automatically download, keep this file safe as it contains your private key. Once downloaded, return to your Google Play Console tab, now you can click Done. You will now see your new service account in the Service accounts section. Click on the Grant access button and a new page will open. Select Finance from Role and ensure both View app information and View financial data are checked. Click Add user and you’re done. Real-time developer notifications Google Play can send real-time developer notifications when subscription entitlements change. Every Google Play App registered with Mbaasy come furnished with a unique Google Cloud Pub/Sub Topic, making the process as simple as copying the Pub/Sub topic from the Mbaasy App Publisher Console > Apps > [App] > Settings > Play Store settings page and pasting it on the Google Play Developer Publisher Console > Apps > [App] > Development tools > Services & APIs page in the Real-time developer notifications section. This will ensure Mbaasy stays up-to-date with any changes made to your subscriptions.
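You do not need to call the Google Play Developer API yourself when using Mbaasy, but for background, the sketch below shows how a service-account JSON key like the one you just downloaded is typically used to authenticate against that API. The package name, subscription ID, and purchase token are placeholders, and the google-auth and google-api-python-client packages are assumed to be installed:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/androidpublisher"]

# The JSON key file downloaded when the service account was created.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)

publisher = build("androidpublisher", "v3", credentials=credentials)

# Placeholder identifiers; substitute your app's Package name and a real
# purchase token reported by a client device.
result = publisher.purchases().subscriptions().get(
    packageName="com.example.app",
    subscriptionId="premium_monthly",
    token="PURCHASE_TOKEN",
).execute()

print(result.get("expiryTimeMillis"))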
https://docs.mbaasy.com/getting_started/google_play/
2020-02-16T21:43:08
CC-MAIN-2020-10
1581875141430.58
[array(['/assets/images/play_store/play_store_settings.jpg', 'Play Store Settings'], dtype=object) array(['/assets/images/play_store/license-key.jpg', 'Step 1'], dtype=object) array(['/assets/images/play_store/service-account-1.jpg', 'Step 1'], dtype=object) array(['/assets/images/play_store/service-account-2.jpg', 'Step 2'], dtype=object) array(['/assets/images/play_store/service-account-3.jpg', 'Step 3'], dtype=object) array(['/assets/images/play_store/service-account-4.jpg', 'Step 4'], dtype=object) array(['/assets/images/play_store/service-account-5.jpg', 'Step 5'], dtype=object) array(['/assets/images/play_store/service-account-6.jpg', 'Step 6'], dtype=object) array(['/assets/images/play_store/service-account-7.jpg', 'Step 7'], dtype=object) array(['/assets/images/play_store/real-time-developer-notifications.jpg', 'Real-time developer notifications'], dtype=object) ]
docs.mbaasy.com
Installing Windows 7 from a bootable USB memory stick During the beta testing of Windows Vista I used DVD-RW discs to burn daily builds every few days to put onto a 32-bit laptop and a 64-bit desktop machine, as there are some things you just can’t see in a virtual environment – I was impressed then that the clean system to desktop time was ~32 minutes thanks to the new “WIM” installation method. During Windows 7 beta testing, I decided to try out a bootable USB memory stick as the installation source – I was very impressed to see the clean installation time drop to ~15 minutes. Quick tip – don’t change your BIOS device boot order to put USB before HDD or you will get stuck in a boot loop at the first reboot until you unplug the USB device and restart again. Instead, many PCs have the option to hit a key during POST to select a one-time boot device – the first boot sequence prepares the partition you selected and copies over the entire source data to continue installation by booting from the HDD – after this boot you don’t need the installation media any more. Given the incredibly low cost of USB memory sticks, I have one with the 32-bit version and a separate one with the 64-bit version – I can use the extra storage for holding extra installers such as Windows Virtual PC & XP Mode, Windows Live Essentials, VPN and AV software, etc. It is much easier to maintain the images and software on a fast, small device than to re-burn an entire DVD image which is comparatively slow and subject to scratches (or in my case being borrowed and never returned, so I have to burn a new one). How to go about setting up a USB memory stick as a Windows installation source? Rather than reinvent the wheel, Jeff Alexander’s blog has a perfect step-by-step guide on how to prepare a USB memory bootable device for installs so I will just refer you there.
https://docs.microsoft.com/en-us/archive/blogs/mrsnrub/installing-windows-7-from-a-bootable-usb-memory-stick
2020-02-16T23:35:37
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Watch a short video about the admin center. If you found this video helpful, check out the complete training series for small businesses and those new to Microsoft 365.

How to get to the admin center

Sign in to the admin center.

Create an Office 365 group
Manage an Office 365 group

See also

Microsoft 365 Business training videos
https://docs.microsoft.com/sl-si/office365/admin/admin-overview/about-the-admin-center?view=o365-worldwide
2020-02-16T22:44:04
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
It's interesting (and annoying) how some frameworks still bring a lot of stuff we don't need, bundle it all together, and demand a big learning curve. Or maybe you decide to use a framework that helps guide your architecture, but wait: you must toe the line or just get out. Also, there are those which are really light, so light and uncoupled that you start to wonder whether you really need a framework at all.

Understanding those scenarios, our approach is led by the following goal: to create a framework that is not unnecessarily heavy, is semantically intuitive, and can take care of the recurring steps of building applications without being too imperative, focusing on fast and scalable development.
https://docs.apptjs.com/
2020-02-16T22:10:44
CC-MAIN-2020-10
1581875141430.58
[]
docs.apptjs.com
- channel - direct - element - lighting - lightselect - probabilistic - rawdiffusefilter - rawtotallighting - time - vraynoiselevel To add a label to the list of required labels, choose '+ labelname' from Related Labels. To remove a label from the required labels, choose '- labelname' from above. - There are no pages at the moment.
https://docs.chaosgroup.com/label/VRAY4MAX/channel+direct+element+lighting+lightselect+probabilistic+rawdiffusefilter+rawtotallighting+time+vraynoiselevel
2020-02-16T22:36:28
CC-MAIN-2020-10
1581875141430.58
[]
docs.chaosgroup.com
- Popular CRM software features are set to evolve faster in 2019 than ever before
- Traditional ways of collecting customer data face challenges posed by new legislation such as GDPR
- Changing social media trends may require customer relationship management to start switching emphasis away from specific social media platforms.

The Changing Face of Customer Relationship Management in 2019

The world of customer relationship management is changing. In 2017, a study by Forrester found that when implemented correctly, the right CRM tools can result in 245% ROI. In support of this claim, the CRM software market has experienced explosive growth over 2018.

- The CRM software industry is forecast to reach over $80 billion in turnover by 2025
- 91% of businesses with 11 or more employees are already using customer relationship management software
- 74% of enterprises report significant improvements in customer relations after investing in CRM software

However, businesses and CRM software vendors will also face several unique challenges in 2019.

GDPR & Customer Relations in 2019

In September 2018, UK digital marketing agency Everything DM LTD became the first digital marketing agency in Europe to be fined under the EU's new general data protection regulation (GDPR).

What Happened

GDPR came into force on May 25th, 2018. The legislation prohibits companies from sending unsolicited communications to EU citizens in any way. Everything DM did not comply with GDPR, specifically by sending 1.42 million unsolicited emails in 2018 via its CRM platform.

How are CRM Software Users Affected

GDPR isn't a problem for CRM software vendors per se. However, businesses using any customer relationship management software need to ensure GDPR compliance.

- When gathering personally identifiable information such as email addresses, businesses need to prove that they have explicit consent from individuals to receive future communications
- Consumer information stored in marketing databases needs to be stored securely
- Individuals must be able to request retrieval and removal of personal information at any time

Changing Social Media Trends in 2019

As well as GDPR, CRM users in 2019 need to start taking note of the changing social media landscape. In countries like South Africa, instant messaging apps like WhatsApp have already usurped Facebook as the biggest social media platform by market share. For the most part, consumer preference for IM-based apps over traditional social media platforms is being driven by millennials: specifically, younger people who place a strong emphasis on instant 2-way communication with brands.

- CRM-based SMS marketing will see a resurgence in 2019, as more millennials seek to communicate directly with brands
- More businesses will start evolving content marketing strategies in 2019, to cater to WhatsApp and other IM users
- Chatbots on social media platforms like Facebook will become more commonplace, as more consumers use traditional social media platforms as customer service gateways

Must-Have CRM Software Features for 2019

As well as instant 2-way communication with brands, millennials increasingly expect to be served dynamic web content. Thankfully, CRM solutions offered by the likes of Flexie already make this possible. Take Flexie for a test drive for just $1 for a no-obligation 14-day trial period.

To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy, and subscribe to our YouTube channel Flexie CRM.
https://docs.flexie.io/docs/marketing-trends/customer-relationship-management-trends-to-watch-in-2019/
2020-02-16T23:42:09
CC-MAIN-2020-10
1581875141430.58
[]
docs.flexie.io
Autopilot – Even Easier Device Enrollment & Deployment In Windows 10 Out Of The Box: With Windows AutoPilot, IT professionals can customize the Out of Box Experience (OOBE). Some of the benefits of Microsoft Autopilot include automatic Azure AD join – no product keys to manage, no reboots, no prompts for the user (requires a Windows 10 Enterprise E3 subscription). Here is how you can set up the Autopilot program and see it in action:
https://docs.microsoft.com/en-us/archive/blogs/nzedu/autopilot-even-easier-device-enrollment-deployment-in-windows-10-out-of-the-box
2020-02-16T23:51:26
CC-MAIN-2020-10
1581875141430.58
[array(['https://samuelmcneill.files.wordpress.com/2017/06/autopilot-4.png', 'Autopilot 4'], dtype=object) ]
docs.microsoft.com
panda3d.core.HTTPClient¶ - class HTTPClient¶ Bases: ReferenceCountGlobalPtr(). Inheritance diagram __init__(copy: HTTPClient) → None - static initRandomSeed() → None¶. setProxySpec(proxy_spec: str) → None¶Proxy()for each scheme/proxy pair. getProxySpec() → str¶. setDirectHostSpec(direct_host_spec: str) → None¶ Specifies the set of hosts that should be connected to directly, without using a proxy. This is a semicolon-separated list of hostnames that may contain wildcard characters (“*”). getDirectHostSpec() → str¶ Returns the set of hosts that should be connected to directly, without using a proxy, as a semicolon-separated list of hostnames that may contain wildcard characters (“*”). setTryAllDirect(try_all_direct: bool) → None¶. getTryAllDirect() → bool¶ Returns whether a failed connection through a proxy will be followed up by a direct connection attempt, false otherwise. clearProxy() → None¶ Resets the proxy spec to empty. Subsequent calls to addProxy()may be made to build up the set of proxy servers. addProxy(scheme: str, proxy: URLSpec) → None¶ Adds the indicated proxy host as a proxy for communications on the given scheme. Usually the scheme is “http” or “https”. It may be the empty string to indicate a general proxy. The proxy string may be the empty URL to indicate a direct connection. clearDirectHost() → None¶ Resets the set of direct hosts to empty. Subsequent calls to addDirectHost()may be made to build up the list of hosts that do not require a proxy connection. addDirectHost(hostname: str) → None¶ Adds the indicated name to the set of hostnames that are connected to directly, without using a proxy. This name may be either a DNS name or an IP address, and it may include the * as a wildcard character. getProxiesForUrl(url: URLSpec) → str¶ Returns a semicolon-delimited list of proxies, in the order in which they should be tried, that are appropriate for the indicated URL. The keyword DIRECT indicates a direct connection should be tried. setUsername(server: str, realm: str, username: str) → None¶. getUsername(server: str, realm: str) → str¶ Returns the username:password string set for this server/realm pair, or empty string if nothing has been set. See setUsername(). setCookie(cookie: HTTPCookie) → None¶ Stores the indicated cookie in the client’s list of cookies, as if it had been received from a server. clearCookie(cookie: HTTPCookie) → bool¶ Removes the cookie with the matching domain/path/name from the client’s list of cookies. Returns true if it was removed, false if the cookie was not matched. hasCookie(cookie: HTTPCookie) → bool¶ Returns true if there is a cookie in the client matching the given cookie’s domain/path/name, false otherwise. getCookie(cookie: HTTPCookie) → HTTPCookie¶ Looks up and returns the cookie in the client matching the given cookie’s domain/path/name. If there is no matching cookie, returns an empty cookie. - Return type - copyCookiesFrom(other: HTTPClient) → None¶ Copies all the cookies from the indicated HTTPClient into this one. Existing cookies in this client are not affected, unless they are shadowed by the new cookies. writeCookies(out: ostream) → None¶ Outputs the complete list of cookies stored on the client, for all domains, including the expired cookies (which will normally not be sent back to a host). sendCookies(out: ostream, url: URLSpec) → None¶ Writes to the indicated ostream a “Cookie” header line for sending the cookies appropriate to the indicated URL along with an HTTP request. This also removes expired cookies. 
setClientCertificateFilename(filename: Filename) → None¶ Sets the filename of the pem-formatted file that will be read for the client public and private keys if an SSL server requests a certificate. Either this or setClientCertificatePem()may be used to specify a client certificate. setClientCertificatePem(pem: str) → None¶ Sets the pem-formatted contents of the certificate that will be parsed for the client public and private keys if an SSL server requests a certificate. Either this or setClientCertificateFilename()may be used to specify a client certificate. setClientCertificatePassphrase(passphrase: str) → None¶ Sets the passphrase used to decrypt the private key in the certificate named by setClientCertificateFilename()or setClientCertificatePem(). loadClientCertificate() → bool¶. addPreapprovedServerCertificateFilename(url: URLSpec, filename: Filename) → bool¶Pem(), and the weaker addPreapprovedServerCertificateName(). addPreapprovedServerCertificatePem(url: URLSpec, pem: str) → bool¶ Adds the certificate defined in the indicated data string, formatted as a PEM block,Filename(), and the weaker addPreapprovedServerCertificateName(). addPreapprovedServerCertificateName(url: URLSpec, name: str) → bool¶ Adds the certificate name only, as a “pre-approved” certificate name for the indicated server, defined by the hostname and port (only) from the given URL. This is a weaker function than addPreapprovedServerCertificateFilename().=… clearPreapprovedServerCertificates(url: URLSpec) → None¶ Removes all preapproved server certificates for the indicated server and port. clearAllPreapprovedServerCertificates() → None¶ Removes all preapproved server certificates for all servers. setHttpVersion(version: HTTPVersion) → None¶ Specifies the version of HTTP that the client uses to identify itself to the server. The default is HV_11, or HTTP 1.0; you can set this to HV_10 (HTTP 1.0) to request the server use the older interface. getHttpVersion() → HTTPVersion¶ Returns the client’s current setting for HTTP version. See setHttpVersion(). - Return type HTTPVersion getHttpVersionString() → str¶ Returns the current HTTP version setting as a string, e.g. “HTTP/1.0” or “HTTP/1.1”. - static parseHttpVersionString(version: str) → HTTPVersion¶ Matches the string representing a particular HTTP version against any of the known versions and returns the appropriate enumerated value, or HV_other if the version is unknown. - Return type HTTPVersion loadCertificates(filename: Filename) → bool¶ Reads the certificate(s) (delimited by —–BEGIN CERTIFICATE—– and —–END CERTIFICATE—–) from the indicated file and makes them known as trusted public keys for validating future connections. Returns true on success, false otherwise. setVerifySsl(verify_ssl: VerifySSL) → None¶ Specifies whether the client will insist on verifying the identity of the servers it connects to via SSL (that is, https). The parameter value is an enumerated type which indicates the level of security to which the client will insist upon. getVerifySsl() → VerifySSL¶ Returns whether the client will insist on verifying the identity of the servers it connects to via SSL (that is, https). See setVerifySsl(). - Return type VerifySSL setCipherList(cipher_list: str) → None¶. getCipherList() → str¶ Returns the set of ciphers as set by setCipherList(). See setCipherList(). 
makeChannel(persistent_connection: bool) → HTTPChannel¶ Returns a new HTTPChannel object that may be used for reading multiple documents using the same connection, for greater network efficiency than calling HTTPClient.getDocument(. - Return type - postForm(url: URLSpec, body: str) → HTTPChannel¶ Posts form data to a particular URL and retrieves the response. Returns a new HTTPChannel object whether the document is successfully read or not; you can test is_valid() and get_return_code() to determine whether the document was retrieved. - Return type - getDocument(url: URLSpec) → HTTPChannel¶ Opens the named document for reading. Returns a new HTTPChannel object whether the document is successfully read or not; you can test is_valid() and get_return_code() to determine whether the document was retrieved. - Return type - getHeader(url: URLSpec) → HTTPChannel¶ Like getDocument(), except only the header associated with the document is retrieved. This may be used to test for existence of the document; it might also return the size of the document (if the server gives us this information). - Return type - - static base64Encode(s: str) → str¶ Implements HTTPAuthorization::base64_encode(). This is provided here just as a convenient place to publish it for access by the scripting language; C++ code should probably use HTTPAuthorization directly. - static base64Decode(s: str) → str¶ Implements HTTPAuthorization::base64_decode(). This is provided here just as a convenient place to publish it for access by the scripting language; C++ code should probably use HTTPAuthorization directly.
https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.HTTPClient
2020-02-16T21:39:36
CC-MAIN-2020-10
1581875141430.58
[]
docs.panda3d.org
AWS Directory Service for Microsoft Active Directory allows you to configure and verify trust relationships. This action verifies a trust relationship between your AWS Managed Microsoft AD directory and an external domain. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. verify-trust --trust-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --trust-id (string) The unique Trust ID of the trust relationship to verify.
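For example, a minimal invocation looks like this (the trust ID below is only a placeholder; substitute the ID of your own trust relationship):
$ aws ds verify-trust --trust-id t-1234567890abcdef0
On success the command returns the verified trust ID.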
https://docs.aws.amazon.com/cli/latest/reference/ds/verify-trust.html
2020-02-16T22:14:06
CC-MAIN-2020-10
1581875141430.58
[]
docs.aws.amazon.com
Storage Server Donations¶ The following is a configuration convention which allows users to anonymously support the operators of storage servers. Donations are made using Zcash shielded transactions to limit the amount of personal information incidentally conveyed. Sending Donations¶ To support a storage server following this convention, you need several things: - a Zcash wallet capable of sending shielded transactions (at least until Zcash 1.1.1 this requires a Zcash full node) - a shielded address with sufficient balance - a running Tahoe-LAFS client node which knows about the recipient storage server For additional protection, you may also wish to operate your Zcash wallet and full node using Tor. Find Zcash Shielded Address¶ To find an address at which a storage server operator wishes to receive donations, launch the Tahoe-LAFS web UI: $ tahoe webopen Inspect the page for the storage server area. This will have a heading like Connected to N of M known storage servers. Each storage server in this section will have a nickname. A storage server with a nickname beginning with zcash: is signaling it accepts Zcash donations. Copy the full address following the zcash: prefix and save it for the next step. This is the donation address. Donation addresses beginning with z are shielded. It is recommended that all donations be sent from and to shielded addresses. Send the Donation¶ First, select a donation amount. Next, use a Zcash wallet to send the selected amount to the donation address. Using the Zcash cli wallet, this can be done with commands like: $ DONATION_ADDRESS="..." $ AMOUNT="..." $ YOUR_ADDRESS="..." $ zcash-cli z_sendmany $YOUR_ADDRESS "[{\"address\": \"$DONATION_ADDRESS\", \"amount\": $AMOUNT}]" Remember that you must also have funds to pay the transaction fee (which defaults to 0.0001 ZEC in mid-2018). Receiving Donations¶ To receive donations from users following this convention, you need the following: - a Zcash shielded address Further Reading¶ To acquaint yourself with the security and privacy properties of Zcash, refer to the Zcash documentation.
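One practical note on the z_sendmany step above: it is asynchronous and returns an operation id rather than completing immediately. A sketch of confirming the donation with the standard zcash-cli operation RPCs:
$ OPID=$(zcash-cli z_sendmany $YOUR_ADDRESS "[{\"address\": \"$DONATION_ADDRESS\", \"amount\": $AMOUNT}]")
$ zcash-cli z_getoperationstatus "[\"$OPID\"]"
$ zcash-cli z_getoperationresult "[\"$OPID\"]"
Wait until the operation reports success before assuming the donation has been sent.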
https://tahoe-lafs.readthedocs.io/en/latest/accepting-donations.html
2020-02-16T21:16:33
CC-MAIN-2020-10
1581875141430.58
[]
tahoe-lafs.readthedocs.io
Table of Contents DAZ Studio 4.x - QuickStart Guide PDF - User Guide PDF Customizing Your Figures If you have read Chapter 2 of this User Guide you should be familiar with loading actors, adding clothing and props, as well as environments. Chapter 2 discussed briefly how to manipulate the content you've loaded in the Parameters pane. The discussion focused mostly on transforms. To make your scene unique you are going to want to customize the shape of your figures and objects using morphs. So what exactly is shaping, and what is a morph? In DAZ Studio, “Shaping” is the term used to describe the process of changing the shape of an object. Shaping is accomplished via morph targets. A “Morph Target” or “Morph” contains information for each of the vertices of a 3D object, and how they should move in relation to each other when that morph is applied. Fortunately for us, DAZ Studio takes care of the complicated calculations and presents morphs to the user as a series of sliders in the Shaping pane. All you have to do to shape your object is adjust a morph slider; it's just that simple. Note: In this chapter we will be using the Hollywood Blvd layout and the 'Actors, Wardrobe & Props' activity. By now it should be clear that the current scene selection is extremely important when performing any action in DAZ Studio. Shaping is no exception. The current scene selection will determine what properties appear in the various property related panes. When shaping you must have the object selected in the Scene pane. Morphs for each object are categorized by region. Selecting a morph “Region” (discussed below) will determine which morphs are displayed in the Shaping pane. In order to understand the Region Navigator Tool, and why it is necessary, a bit of background on regions is helpful. In DAZ Studio, morphs on figures are assigned to regions of the body based on what part of the body they affect. This makes finding morphs easier - if you want to find morphs that modify the arms of a figure, you simply look in the 'Arms' region. The same is true for 'Legs', 'Hands', 'Face' etc. Morphs that affect the entire figure are found under the 'Actor' region. The Region Navigator tool allows you to easily select different regions of a figure within the “Viewport.” To use the Region Navigator tool you must first activate it by clicking its icon in the toolbar. Once the tool is active you can select your figure within the viewport. You will see an outline around your figure. The outline indicates the currently selected region. By default DAZ Studio will automatically select the highest level region when you select a figure. You can then select one of the next lower levels by clicking again within the currently selected region. You can also select sibling regions, or regions at the same level as the currently selected region. If you hover your cursor over a region it will become highlighted and a tooltip will display the name of that region. You can also right click anywhere on a figure and choose a region from the list of regions that include the point you clicked. The DAZ Studio Shaping Video explains more. Note: The Region Navigator tool only selects regions for figures that have them. If a figure or object does not have regions then the Region Navigator tool will select “Nodes” instead. The Shaping pane is your hub for customizing the look of an object. The Shaping pane allows you to morph objects or figures that have morphs available. 
You could potentially change your figure from a small child to a fierce warrior, from a giant troll to a petite woman. The Shaping pane is where you can make the vision of your character and objects a reality. The Shaping pane is located on the left side of the DAZ Studio interface in the Hollywood Blvd layout. The pane is part of the 'Actors, Wardrobe & Props' activity. There are two pages in the Shaping pane - the Editor page and the Presets page. The Editor page of the Shaping pane is organized similarly to the Parameters pane - on the left hand side of the pane is a list of the regions for your selected figure or object if the selection has regions. If the selection does not have regions you will see property groups displayed instead. You can expand or collapse these regions/groups to reveal subregions/groups by clicking the arrow to the left of the region. Selecting a different region/group or subregion/group will change which properties are displayed. The right hand side of the pane is where you will find the actual shaping properties. These are the sliders that you can adjust to change the shape of your object. All morphs included in a selected region will be displayed on the right. You can dial up any combination of morphs to shape your object. To apply a morph, simply click the handle of the slider and drag it to the value you desire. You can also click the nudge icons on either end of the slider or enter a numeric value by clicking the value for the morph and entering the desired value into the field. Towards the top of the left hand side of the pane is a drop down menu that will allow you to change your current scene selection. This menu functions just like the Scene pane with the exception that it only lists items with geometry. Cameras and lights are filtered out of the “Scene Selection Menu” on the Shaping pane. This menu allows you to easily change your scene selection without having to leave the Shaping pane. Below the drop down menu are two useful filters - 'All' and 'Currently Used.' The 'All' filter allows you to see all shaping properties for the currently selected figure or object. The 'Currently Used' filter can be used to display shaping properties that have been changed from their default load state - in other words shaping properties that are currently in use. The Presets page is where you will load Shaping Presets for your figure. A Shaping Preset contains information for the morphs of a figure. Essentially, when you apply a Shaping Preset to your figure your figure will be morphed into whatever shape the preset dictates - assuming the figure has the morphs available. Shaping Presets provide a quick and easy way to shape your figure. You can switch to the Presets page by clicking the 'Presets' label at the top of the pane. The highlighted label indicates the current page. The Presets page functions like the Smart Content pane. Categories are listed on the left hand side, in the “Category View.” Shaping Presets in the selected category are displayed on the right, in the “Asset View.” To load a Shaping Preset all you need to do is double click the icon for the preset. DAZ Studio doesn't come with any free Shaping presets for Genesis or Genesis 2. However, both the Genesis figure and the Genesis 2 Female figure come with a few morphs for free. We encourage you to try out the morphs for both. Look at the different regions for each figure to get a feel for what morphs are going to be located where. 
Play around in the Editor page and adjust some sliders; see what you can come up with. Remember, you can purchase additional morphs for your figure from the DAZ 3D store. The Genesis Evolution: Morph Bundle for Genesis and the Genesis 2 Female(s) Morphs Bundle for Genesis 2 Female provide a wide range of morphs for adjusting the shape of the figure in various ways. Now that you've adjusted a few morphs you may want to save your shape for later. DAZ Studio allows you to do this with a Shaping Preset. A Shaping Preset contains information about the properties that adjust an object's shape. The save filter for this type of preset allows you to choose which properties are included when saving the preset. To save a Shaping Preset first make sure that the object you want to save the preset for is your current scene selection. Once the object is selected, open the 'File' menu and click on the 'Save As…' submenu. In the 'Save As…' submenu you will see the 'Shaping Preset…' action. Click on the action to save the Shaping Preset. The 'Shaping Preset…' action will launch the 'Filtered Save' dialog. This dialog allows you to name your Shaping Preset as well as choose the location that it is saved to. Pay attention to where you save the preset so you can find it later. When you have chosen a name and location for your file click the 'Save' button. Once you click 'Save' the 'Shaping Preset Save Options' dialog will appear. In this dialog you can customize your Shaping Preset. At the top of the dialog you are presented with the option to have your Shaping Preset include information for the current frame, or include information that is animated over several frames. The main part of the dialog is the “Properties View.” Here you will find your object listed. If you expand your object by clicking on the arrow to the left you will see all of the property groups for the object. You can further expand each group to see subgroups and individual properties. Everything that is selected will be included in the Shaping Preset. If you would like to exclude a property from the preset you can uncheck it. The 'File Options' section of the dialog allows you to choose whether or not to compress the file. By default this option is checked. Compressing the file will save space on your hard disk, and is generally a good choice. Advanced users may wish to keep the file uncompressed so that it can be edited in a text editor later. Once you are happy with your settings for the Shaping Preset click on 'Accept.' DAZ Studio will then save the preset and you will be able to access and load it later. The easiest way to access the preset later is through the Presets page of the Shaping pane. To load a Shaping Preset you must first select the object you want the preset to apply to. You can then load the preset by double clicking icon for the preset or dragging and dropping the icon on to your object. So your figure is sculpted and shaped like a Greek god, or close enough anyway. Now it is time to show your friends right? Well it's fairly simple to do with a quick render. We've done these before, and we'll do it again. Just like last time we will use the default render settings. All you need to do is align the camera using the camera controls so that your scene is framed how you want. Once you are ready hit Ctrl+R on a PC or Cmd+R on a Mac to start your render. Once it is finished name the render and save it. Congratulations on finishing Chapter 4 - Shaping. Coming this far is an impressive feat. 
However, your journey into 3D is just beginning and there is still lots to learn. In the next chapter we'll cover the process of posing your figure.
http://docs.daz3d.com/doku.php/public/software/dazstudio/4/userguide/chapters/shaping/start
2020-02-16T22:53:32
CC-MAIN-2020-10
1581875141430.58
[]
docs.daz3d.com
You are looking at documentation for an older release. Not what you want? See the current release documentation. An add-on could be a set of extensions, customizations, xml configurations, applications, templates or any new services packaged in a zip archive. In other words, an add-on could be whatever extends eXo Platform capabilities by adding services, resources, and more. The easiest way to manage add-ons is to use the eXo Add-ons Manager that is shipped by default in all 4.3 editions. The eXo Add-ons Manager defines a standard approach to packaging, installing/uninstalling and updating the available add-ons. With the eXo Add-ons Manager, you, as an administrator, can manage the add-ons installed on your eXo Platform instances via the Command Line Interface (known as CLI) in a simple manner. Basically, start with the launch scripts: $PLATFORM_HOME/addon (Windows, Linux / Mac OS X) $PLATFORM_HOME/addon.bat (Windows) When running the addon script by itself, you can view the different sets of commands, arguments and options. The global syntax is in the format addon [command] [arguments] [options], where: [command] is either of: list, install, uninstall, describe. [arguments] are ones specific to an add-on (Id and version). [options] are switch options that can be global or specific to the command (starting with -- or -). Also, you could add the following useful options: --help / -h - Displays the help information for the command line program. --verbose / -v - Prints the verbose log for debugging/diagnostic purposes. By walking through the following topics in this chapter, you will know how to manage add-ons in eXo Platform via the CLI:
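As a quick illustration before diving into those topics, a typical session strings these commands together (the add-on identifier below is just a placeholder; run the list command to see what your catalog actually contains):
$ ./addon list
$ ./addon describe some-addon-id
$ ./addon install some-addon-id --verbose
$ ./addon uninstall some-addon-id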
https://docs-old.exoplatform.org/public/topic/PLF50/PLFAdminGuide.AddonsManagement.html
2020-02-16T23:21:46
CC-MAIN-2020-10
1581875141430.58
[]
docs-old.exoplatform.org
What happened ? An Update on Exchange Server 2010 SP1 Rollup Update 4 The. Kevin Allison General Manager Exchange Customer Experience
https://docs.microsoft.com/en-us/archive/blogs/jribeiro/what-happened-an-update-on-exchange-server-2010-sp1-rollup-update-4
2020-02-16T23:38:26
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Client playback: a 30 second example¶ 1. Record the login conversation with mitmdump.¶ Run mitmdump with the -w flag, writing the conversation to the file wireless-login. 2. Point your browser at the mitmdump instance.¶ I use a tiny Firefox addon called Toggle Proxy to switch quickly to and from mitmproxy. I’m assuming you’ve already configured your browser with mitmproxy’s SSL certificate authority. 3. Log in as usual.¶ And that’s it! You now have a serialized version of the login process in the file wireless-login, and you can replay it at any time like this: >>> mitmdump -c wireless-login Embellishments¶ The recorded conversation includes requests that aren’t strictly needed for the replay, but it doesn’t hurt to trim them anyway. So, we fire up the mitmproxy console tool on our serialized conversation, like so: >>> mitmproxy -r wireless-login We can now go through and manually delete (using the d keyboard shortcut) everything we want to trim. When we’re done, we use w to save the conversation back to the file.
https://mitmproxy.readthedocs.io/en/v0.17/tutorials/30second.html
2020-02-16T22:04:14
CC-MAIN-2020-10
1581875141430.58
[]
mitmproxy.readthedocs.io
Ubuntu installation¶ This document aims to demonstrate how to install the Ubuntu operating system on a user's computer. Tip: You can try Ubuntu Desktop or Server; the latter is recommended. Steps¶ - The user needs at least 4.5 GB of free space on their computer. - Connect the USB drive or DVD containing the Ubuntu installer. - When you turn on your computer, the screen below should show up automatically or after pressing F12. - Make sure you are connected to the internet, then the screen below is shown. Mark both options and click on “Continue”. - Use the checkboxes to choose whether you’d like to install Ubuntu alongside another operating system, or delete your existing operating system and replace it with Ubuntu. In our case we select “Something Else” and click on “Continue”. - In this stage, you will create partitions. - The last step is choosing your language and region. After doing so and restarting your computer you can start using Ubuntu.
https://ravada.readthedocs.io/en/latest/docs/Ubuntu_Installation.html
2020-02-16T21:50:24
CC-MAIN-2020-10
1581875141430.58
[]
ravada.readthedocs.io
You are looking at documentation for an older release. Not what you want? See the current release documentation. Tomcat and JBoss servers.
https://docs-old.exoplatform.org/public/topic/PLF50/PLFAdminGuide.Deployment.html
2020-02-16T23:23:28
CC-MAIN-2020-10
1581875141430.58
[]
docs-old.exoplatform.org
Select Committee on the Climate Crisis Thursday, August 1, 2019 (9:00 AM local time) Wittemyer Court Room, Wolf Law Building, University of Colorado Boulder, 2450 Kittredge Loop Drive, Boulder, CO 80305 The Honorable Jared Polis Governor, Colorado The Honorable Suzanne Jones Mayor, Boulder, CO The Honorable Wade Troxell Mayor, Fort Collins, CO Mr. Cary Weiner State Energy Specialist, CSU Extension; and Director, Rural Energy Center, Colorado State University Mr. Chris Wright CEO, Liberty Oilfield Services Ms. Heidi VanGenderen Chief Sustainability Officer, University of Colorado-Boulder First Published: July 24, 2019 at 01:38 PM Last Updated: November 19, 2019 at 10:14 AM
https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=109874
2020-02-16T23:41:37
CC-MAIN-2020-10
1581875141430.58
[]
docs.house.gov
Manage the Search Center in SharePoint Server 2013 2016 2019 SharePoint Online In a Search Center site, users get the classic search experience. When you create an Enterprise Search Center site collection as described in Create a Search Center site in SharePoint Server, SharePoint Server creates a default search home page and a default search results page. In addition, several pages known as search verticals are also created (search results pages customized for searching specific types of content). The following articles describe how to configure properties for each Web Part that is used in the Enterprise Search Center site:
https://docs.microsoft.com/en-us/SharePoint/search/manage-the-search-center-in-sharepoint-server?redirectedfrom=MSDN
2020-02-16T22:29:44
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Developing Applications to Use Group Policy Are we at the point where IT pros will start talking to the dev team about security? It seems like dev teams and IT pros now CAN and hopefully WILL work together. Applications can be developed to take advantage of the most common type of policy setting, namely registry-based policy. For example, a programmer can create a component that includes “available” and “unavailable” functionality based on registry-based policy. Administrators then have a well-defined and simple process: They can use the GPMC to turn functionality on or off for all affected users and computers. This type of policy is implemented using a built-in registry client-side extension on every Group Policy client to process the data and manage the appropriate registry keys. Registry-based policy settings are stored in one of four secure Group Policy keys, which cannot be modified without administrative rights on the machine. For more information, see the Implementing Registry-Based Group Policy article.
https://docs.microsoft.com/en-us/archive/blogs/appsec/developing-applications-to-use-group-policy
2020-02-16T23:37:23
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Compare two different csv files using PowerShell Today
https://docs.microsoft.com/en-us/archive/blogs/stefan_stranger/compare-two-different-csv-files-using-powershell
2020-02-16T23:52:03
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Taking medication as prescribed is important for managing health conditions and general wellbeing. Plus, with financial incentives for providers, improving adherence is also beneficial for pharmacies. Amplicare's Low Adherence report identifies your patients who have low adherence (adherence scores between 60-79) to their maintenance cholesterol, oral diabetes, and/or hypertension medication. Here's how you can capitalize on these opportunities to help your patients and improve your store performance: First, head to Opportunities and select the Low Adherence report in the Worklist dropdown box. "Add filter" to segment the report so you can focus on your DIR Impact opportunities. Clicking into an opportunity will allow you to see the specific maintenance medications, their next refill dates, and ways to "Complete" the opportunity if it has been addressed: In the opportunity box you can print a handout notifying the patient of their low adherence: Include a MedSync enrollment form in the handout to streamline the process for obtaining consent for the program. Other tips: - Be sure to utilize Amplicare Assist to address these opportunities right in your daily workflow. - Use Amplicare Connect to reach out to all of your low adherent patients with a call/text campaign. What's Next? If you're curious on how we calculate adherence, check out this article!
https://docs.amplicare.com/en/articles/1137413-low-adherence-worklist
2021-01-15T21:39:42
CC-MAIN-2021-04
1610703496947.2
[]
docs.amplicare.com
🏷 Project Template¶ The Data Partnership Project Template creates a project structure inspired by Cookiecutter Data Science, with an out-of-the-box Jupyter Book published automatically on GitHub Pages. Here are some of the practices that the project template aims to encourage: Reproducibility Transparency Credibility Additional Resources¶ Development Data Partnership A partnership between international organizations and companies, created to facilitate the use of third-party data in research and international development. A curated list of projects, data goods and derivative works associated with the Development Data Partnership The DIME Wiki A public good developed and maintained by DIME Analytics, a team which creates tools that improve the quality of impact evaluation research at DIME. The DIME Analytics Data Handbook This book is intended to serve as an introduction to the primary tasks required in development research, from experimental design to data collection to data analysis to publication. It serves as a companion to the DIME Wiki and is produced by DIME Analytics. GitHub Pages are public webpages hosted and easily published through GitHub. Jupyter Book is an open source project for building beautiful, publication-quality books and documents from computational material.
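Tying this back to the template itself: once a project has been generated, the accompanying Jupyter Book can be previewed locally before GitHub Pages publishes it. This is only a sketch; the docs/ path is an assumption about the template layout and may differ in your project:
$ pip install jupyter-book
$ jupyter-book build docs/
The rendered HTML ends up under docs/_build/html/ and can be opened directly in a browser.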
https://docs.datapartnership.org/pages/project_template.html
2021-01-15T21:16:12
CC-MAIN-2021-04
1610703496947.2
[]
docs.datapartnership.org
Use AWS IAM user credentials (Console). Navigate to the “Policies” section in the left navigation bar and click the “Get Started” button. Click the “Create Policy” button and then select the “Create Your Own Policy” option. Set the name for the policy to “BitnamiCloudHosting” and add the policy document shown below, replacing the ACCOUNT_ID placeholder with your Amazon Account ID. { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "iam:*", "Resource": "arn:aws:iam::ACCOUNT_ID:user/bitnami-hosting-operators/*" }, { "Effect": "Allow", "Action": ["sts:GetFederationToken", "ec2:*", "cloudwatch:GetMetricStatistics", "cloudformation:*"], "Resource": "*" } ] } The message “BitnamiCloudHosting has been created. Now you are ready to attach your policy to users, groups, and roles.” will be displayed if the policy was created successfully. Navigate to the “Users” section, create or select the user, attach the “BitnamiCloudHosting” policy to it, and generate an access key for that user. The “Access Key ID” and “Secret Access Key” can now be used to create the cloud account in the Bitnami Cloud Hosting dashboard.
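If you prefer the AWS CLI over the console, the same policy can be created along these lines (a sketch; it assumes the policy document above is saved as policy.json with ACCOUNT_ID already substituted):
$ aws iam create-policy --policy-name BitnamiCloudHosting --policy-document file://policy.json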
https://docs.bitnami.com/bch/faq/administration/use-iam/
2021-01-15T22:07:39
CC-MAIN-2021-04
1610703496947.2
[]
docs.bitnami.com
Understanding the Apache Component Version At Cloudera, products are versioned with 4 digits plus a build number (for example, HDF 3.5.2.0-99). Apache components shipped within the products are versioned with the product version prefixed by the 3 digits of the Apache version (for example, in HDF 3.5.2.0-99, we have NiFi 1.12.1.3.5.2.0-99). When we start building a release of a product, we use, as the base version, the Apache component version which exists at that time. However, it does not mean that the component version we ship is equal to the corresponding Apache version. Between the moment we start building a release, and the moment the release gets out, we add a lot of improvements, features and fixes. This is why it is important to distinguish the Apache version from the Cloudera version. For example, the NiFi version in HDF 3.5.2.0 is NiFi 1.12.1.3.5.2.0-<build-number>. It means that when we started building the HDF 3.5.2.0 release, we used Apache NiFi 1.12.1 as the base version. However, it does not mean that the NiFi version we ship is equal to Apache NiFi 1.12.1. In fact, we added a lot of improvements on top of it. At the end, in this release, NiFi 1.12.1.3.5.2.0-<build-number> is actually including more things than what you could find in Apache NiFi 1.12.1.
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.5.2/release-notes/content/understanding-component-version.html
2021-01-15T21:44:13
CC-MAIN-2021-04
1610703496947.2
[]
docs.cloudera.com
Managing MiNiFi Apart from working with dataflows, you can also perform some management tasks using MiNiFi. Monitoring status using MiNiFi: You can use the minifi.sh flowStatus option to monitor a range of aspects of your MiNiFi operational and dataflow status. Loading a new dataflow for MiNiFi: You can load a new dataflow for a MiNiFi instance to run. Stopping MiNiFi: You can stop MiNiFi at any time. Parent topic: MiNiFi agent quick start
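To expand on the flowStatus option mentioned above, it takes a query argument describing what to report; the exact query keys vary by MiNiFi version, so treat these invocations as illustrative rather than definitive:
$ bin/minifi.sh flowStatus systemdiagnostics:heap,processorstats
$ bin/minifi.sh flowStatus processor:all:health,stats,bulletins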
https://docs.cloudera.com/cem/1.2.2/minifi-agent-quick-start/topics/cem-managing-minifi.html
2021-01-15T21:39:23
CC-MAIN-2021-04
1610703496947.2
[]
docs.cloudera.com
Kong for Kubernetes Enterprise is an enhanced version of the Open-Source Ingress Controller. It includes all Enterprise plugins and comes with 24x7 support for worry-free production deployment. This is available to enterprise customers of Kong, Inc. only. Prerequisites Before we can deploy Kong, we need to satisfy two prerequisites: In order to create these secrets, let’s Again, please take a note of the namespace kong. Installers Once the secrets/deploy/manifests/enterprise-k8s 2 $ helm install kong/kong \ --name demo --namespace kong \ --values # Helm 3 $ helm install kong/kong --generate-name --namespace kong \ --values \ --set ingressController.installCRDs=false.
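Coming back to the prerequisites above, creating the Kong Enterprise license secret typically looks like the following sketch (it assumes your license file is saved locally as license.json and that the kong namespace is created first):
$ kubectl create namespace kong
$ kubectl create secret generic kong-enterprise-license --from-file=license=./license.json -n kong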
https://docs.konghq.com/kubernetes-ingress-controller/1.0.x/deployment/k4k8s-enterprise/
2021-01-15T21:09:37
CC-MAIN-2021-04
1610703496947.2
[]
docs.konghq.com
About Command Precedence Short description Describes how PowerShell determines which command to run. Long description Command precedence describes how PowerShell determines which command to run when a session contains more than one command with the same name. Commands within a session can be hidden or replaced by commands with the same name. This article shows you how to run hidden commands and how to avoid command-name conflicts. Command precedence When a PowerShell session includes more than one command that has the same name, PowerShell determines which command to run by using the following rules. If you specify the path to a command, PowerShell runs the command at the location specified by the path. For example, the following command runs the FindDocs.ps1 script in the "C:\TechDocs" directory: C:\TechDocs\FindDocs.ps1 As a security feature, PowerShell does not run executable (native) commands, including Using wildcards in execution You may use wildcards in command execution. Using wildcard characters is also known as globbing. PowerShell executes a file that has a wildcard match, before a literal match. For example, consider a directory with the following files: Get-ChildItem C:\temp\test Directory: C:\temp\test Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 5/20/2019 2:29 PM 28 a.ps1 -a---- 5/20/2019 2:29 PM 28 [a1].ps1 Both script files have the same content: $MyInvocation.MyCommand.Path. This command displays the name of the script that is invoked. When you run [a1].ps1, the file a.ps1 is executed even though the file [a1].ps1 is a literal match. C:\temp\test\[a1].ps1 C:\temp\test\a.ps1 Now let's delete the a.ps1 file and attempt to run it again. Remove-Item C:\temp\test\a.ps1 C:\temp\test\[a1].ps1 C:\temp\test\[a1].ps1 You can see from the output that [a1].ps1 runs this time because the literal match is the only file match for that wildcard pattern. For more information about how PowerShell uses wildcards, see about_Wildcards. Note To limit the search to a relative path, you must prefix the script name with the .\ path. This limits the search for commands to files in that relative path. Without this prefix, other PowerShell syntax may conflict and there are few guarantees that the file will be found. If you do not specify a path, PowerShell uses the following precedence order when it runs commands for all items loaded in the current session: - Alias - Function - Cmdlet - External executable files (programs and non-PowerShell scripts) Therefore, if you type "help", PowerShell first looks for an alias named help, then a function named Help, and finally a cmdlet named Help. It runs the first help item that it finds. For example, if your session contains a cmdlet and a function, both named Get-Map, when you type Get-Map, PowerShell runs the function. Note This only applies to loaded commands. If there is a build executable and an Alias build for a function with the name of Invoke-Build inside a module that is not loaded into the current session, PowerShell runs the build executable instead. It does not auto-load modules if it finds the external executable in this case. It is only when no external executable is found that an alias, function, or cmdlet with the given name is invoked, thereby triggering auto-loading of its module. When the session contains items of the same type that have the same name, PowerShell runs the newer item. 
For example, if you import another Get-Date cmdlet from a module, when you type Get-Date, PowerShell runs the imported version over the native one.. Finding hidden commands The All parameter of the Get-Command cmdlet gets all commands with the specified name, even if they are hidden or replaced. Beginning in PowerShell 3.0, by default, Get-Command gets only the commands that run when you type the command name. In the following examples, the session includes a Get-Date function and a Get-Date cmdlet. The following command gets the Get-Date command that runs when you type Get-Date. Get-Command Get-Date CommandType Name ModuleName ----------- ---- ---------- Function Get-Date The following command uses the All parameter to get all Get-Date commands. Get-Command Get-Date -All CommandType Name ModuleName ----------- ---- ---------- Function Get-Date Cmdlet Get-Date Microsoft.PowerShell.Utility Running hidden commands You can run particular commands by specifying item properties that distinguish the command from other commands that might have the same name. You can use this method to run any command, but it is especially useful for running hidden commands. Qualified names Using the module-qualified name of a cmdlet allows you to run commands hidden by an item with the same name. For example, you can run the Get-Date cmdlet by qualifying it with its module name Microsoft.PowerShell.Utility. Use this preferred method when writing scripts that you intend to distribute. You cannot predict which commands might be present in the session in which the script runs. New-Alias -Name "Get-Date" -Value "Get-ChildItem" Microsoft.PowerShell.Utility\Get-Date Tuesday, September 4, 2018 8:17:25 AM To run a New-Map command that was added by the MapFunctions module, use its module-qualified name: MapFunctions\New-Map To find the module from which a command was imported, use the ModuleName property of commands. (Get-Command <command-name>).ModuleName For example, to find the source of the Get-Date cmdlet, type: (Get-Command Get-Date).ModuleName Microsoft.PowerShell.Utility Note You cannot qualify variables or aliases. Call operator You can also use the Call operator & to run hidden commands by combining it with a call to Get-ChildItem (the alias is "dir"), Get-Command or Get-Module. The call operator executes strings and script blocks in a child scope. For more information, see about_Operators. For example, if you have a function named Map that is hidden by an alias named Map, use the following command to run the function. &(Get-Command -Name Map -CommandType -CommandType function) &($myMap) Replaced items A "replaced" item is one that you can no longer access. You can replace items by importing items of the same name from a module or snap-in. For example, if you type a Get-Map function in your session, and you import a function called Get-Map, it replaces the original function. You cannot retrieve it in the current session. Variables and aliases cannot be hidden because you cannot use a call operator or a qualified name to run them. When you import variables and aliases from a module or snap-in, they replace variables in the session with the same name. Avoiding name conflicts The best way to manage command name conflicts is to prevent them. When you name your commands, use a unique name. For example, add your initials or company name acronym to the nouns in your commands. Also, when you import commands into your session from a PowerShell when you import the DateFunctions module. 
Import-Module -Name DateFunctions -Prefix ZZ For more information, see Import-Module and Import-PSSession below.
https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_command_precedence?view=powershell-7.1
2021-01-15T21:13:57
CC-MAIN-2021-04
1610703496947.2
[]
docs.microsoft.com
Overview Overview The SentryOne Plan Explorer Extension for Azure Data Studio (ADS) is a FREE extension that provides you with enhanced query plan diagrams for batches run in ADS. Optimized layout algorithms and intuitive color-coding help you quickly identify operators in your execution plans that are slowing database performance. With the SentryOne Plan Explorer ADS extension, you can: - View operator costs (per node) by CPU or I/O - Control the graphical plan display with plan diagram context menus - Analyze runtime metrics such as duration and CPU - Toggle between actual and estimated plans - Save ShowPlan XML to send to other users, or open in Plan Explorer Additional Information: See Aaron Bertrand's blog post, SentryOne Plan Explorer Extension for Azure Data Studio, for release notes and details about the extension, including some known issues and how to provide feedback. Requirements System Requirements The following is required to install the SentryOne Plan Explorer ADS Extension: - Azure Data Studio 1.9.0 or higher - .NET Core Runtime 2.1 or higher Recommended System Configuration - 2 or more cores - 8 GB or more of memory Compatible Operating Systems - Windows (8.1 / Server 2012 R2 or higher) - macOS (10.13 or higher) - RedHat (7.6 or higher) - SuSE (12.0 or higher) - Ubuntu (18.04 or higher) Installation Installing the Plan Explorer Extension Important: You must create a free SentryOne Cloud account and agree to the End User License Agreement to access the SentryOne Plan Explorer ADS Extension. Once you've created your account, complete the following steps to Install the Plan Explorer ADS Extension: - Download the Plan Explorer Azure Data Studio Extension from extensions.sentryone.com. - Open Azure Data Studio, and then select File > Install Extension from VSIX Package. - Select the Plan Explorer Azure Data Studio Extension VSIX file, and then select Install. - Select Yes on the warning prompt to install the Plan Explorer Extension. - The installation completes. Success: The Plan Explorer Azure Data Studio Extension has installed successfully! Important: After you have downloaded, and successfully installed the Plan Explorer ADS Extension, the Extension will be enabled in Azure Data Studio by default. Enabling the Extension Enabling the Plan Explorer Extension Important: After you have downloaded, and successfully installed the Plan Explorer ADS Extension, the Extension will be enabled in Azure Data Studio by default. You can also enable or disable the Plan Explorer ADS Extension for any desired plan by doing one of the following: - Selecting SentryOne Plan Explorer in the status bar - Open the Command Palette by selecting CTRL+Shift+P and then selecting Toggle SentryOne Plan ExplorerCTRL+Shift+P and select Toggle SentryOne Plan Explorer - Entering the Keyboard Command Shortcut CTRL+Shift+F5 Note: After enabling the SentryOne Plan Explorer Extension, the Plan Explorer On notification displays in Azure Data Studio. Using the Extension Using the SentryOne Plan Explorer Extension Use the SentryOne Plan Explorer extension to see a detailed query plan for your desired query. Complete the following steps to view an Actual Plan or an Estimated plan: Viewing an Actual Plan - Enable the SentryOne Plan Explorer Extension for ADS. - Enter a new, or load an existing query, and then select Run to collect an Actual Plan and open a Plan Explorer Statement tab. 3. Select View Plan to open the Actual Plan for the selected query statement. 
Viewing an Estimated Plan - Enable the SentryOne Plan Explorer Extension for ADS. - Enter a new, or load an existing query, and then select Explain to collect an Estimated Plan and open a Plan Explorer Statement tab. 3. Select View Plan to open the Estimated Plan for the selected query statement. Plan Explorer Statement Tab The Plan Explorer Statements tab opens automatically when a query is run and the Plan Explorer extension is enabled. The Statements tab separates the query into statements, and provides Duration and CPU information about those statements. Plan Explorer Query Plan Diagram The Plan Diagram uses an optimized layout algorithm that renders plans in a much more condensed view, so more of the plan fits on the screen without having to zoom out. You can zoom in and out by selecting CTRL + Mouse Wheel. Optimized plan node labels prevent truncation of object names in most cases. The estimated cost of the operation is displayed above each node for maximum readability. These cost labels use color scaling by CPU, IO, or CPU+IO so highest cost operations are instantly obvious, even on larger plans. CPU + IO is used by default; change this through the Costs By context menu. All costs in the Plan Diagram are shown to the first decimal place. Through the context menu of the Plan Diagram, choose to show cumulative costs in lieu of per node costs; when combined with color scaling, this feature makes it easy to see which subtrees are contributing most to the plan cost. Hover over a Plan Diagram operator to display a truncated tooltip that provides details about the operator. Select the operator to display the operator's detail window. Note: Return to the Plan Explorer Statements tab by selecting the button. Note: The SentryOne Plan Explorer ADS Extension is compatible with any theme in ADS. Plan Diagram Context Menu The following context menu options are available: Showplan XML Select View XML to open the ShowPlan XML tab for the selected statement. You can copy and save the Showplan XML, and then open it in Plan Explorer. Missing Index Details The Missing Index details option is selectable from the Plan Explorer context menu if you query has any missing indexes. Open the Missing Index details for your query in an untitled document window by selecting Missing Index details. Api Port Settings The default API port for the SentryOne Plan Explorer ADS Extension has been updated to 5042 to avoid potential conflicts. To change the API port, complete the following steps: 1. Select and then select Settings. 2. Select Extensions, and then select SentryOne Plan Explorer. 3. Enter your new Port Value. Note: Any changes you make to the Plan Explorer Api Port are saved automatically.
https://docs.sentryone.com/help/plan-explorer-azure-data-studio-extension
2021-01-15T21:22:29
CC-MAIN-2021-04
1610703496947.2
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4db6c86e121ca50263f5f2/n/s1-plan-explorer-ads-extension-overview-image-096.png', 'Plan Explorer Azure Data Studio Extension Version 0.9.6 Plan Explorer Azure Data Studio Extension'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4db98b8e121c890dfb17d9/n/s1-plan-explorer-ads-extension-plan-explorer-on-096.png', 'Plan Explorer ADS Extension Plan Explorer On Version 0.9.6 Plan Explorer ADS Extension Plan Explorer On'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dc08d6e121c6e0c63f500/n/s1-plan-explorer-ads-extension-plan-explorer-statements-tab-096.png', 'Plan Explorer ADS Extension Plan Explorer Statements tab Version 0.9.6 Plan Explorer ADS Extension Plan Explorer Statements tab'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dc6458e121cfb17fb15f8/n/s1-plan-explorer-ads-extension-plan-diagram-tooltip-096.png', 'Plan Explorer ADS Extension Plan Diagram tooltips Version 0.9.6 Plan Explorer ADS Extension Plan Diagram tooltips'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dc654ec161cd70823e2e4/n/s1-plan-explorer-ads-extension-plan-diagram-expanded-tooltip-096.png', 'Plan Explorer ADS Extension Plan Diagram Operator detail window Version 0.9.6 Plan Explorer ADS Extension Plan Diagram Operator detail window'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d41d8656e121c346a9a3ad2/n/s1-plan-explorer-ads-context-menu-missing-indexes-096.png', 'Plan Explorer ADS Extension Plan Diagram Context menu Version 0.9.6 Plan Explorer ADS Extension Plan Diagram Context menu'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dd361ad121cbe28306d3a/n/s1-plan-explorer-ads-extension-open-settings-096.png', 'Plan Explorer ADS Extension Open Settings'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dd1a4ec161cca1123e2e1/n/s1-plan-explorer-ads-extension-plan-explorer-settings-096.png', 'Plan Explorer ADS Extension SentryOne Plan Explorer Settings Version 0.9.6 Plan Explorer ADS Extension SentryOne Plan Explorer Settings'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5d4dd1b9ec161c0a0923e37b/n/s1-plan-explorer-ads-extension-plan-explorer-api-port-096.png', 'Plan Explorer ADS Extension Api Port Version 0.9.6 Plan Explorer ADS Extension Api Port'], dtype=object) ]
docs.sentryone.com
Indices and tables¶ Introduction¶ Enhanced Enum is a library that gives C++ enums capabilities that they don’t normally have: enum class StatusLabel { INITIALIZING, WAITING_FOR_INPUT, BUSY, }; constexpr auto status = Statuses::INITIALIZING; Their value is no longer restricted to integers: static_assert( status.value() == "initializing" ); static_assert( status == Status::from("initializing") ); They can be iterated: std::cout << "Listing " << Statuses::size() << " enumerators:\n"; for (const auto status : Statuses::all()) { std::cout << status.value() << "\n"; } …all while taking remaining largely compatible with the fundamental enums: static_assert( sizeof(status) == sizeof(StatusLabel) ); static_assert( status == StatusLabel::INITIALIZING ); static_assert( status != StatusLabel::WAITING_FOR_INPUT ); Why yet another enum library for C++?¶ There are plethora of options available for application writers that want similar capabilities than this library provides. Why write another instead of picking one of them? Short answer: Because it solved a problem for me, and I hope it will solve similar problems for other people Longer answer: There is a fundamental limitations to the capabilities of native enums within the standard C++, and in order to cope with them, enum library writers must choose from more or less unsatisfactory options: Resort to compiler implementation details. While this is a non-intrusive way to introduce reflection, it’s not what I’m after. Use macros. By far the most common approach across the ecosystem is to use preprocessor macros to generate the type definitions. To me macros are just another form of code generation. The advantage is that this approach needs standard C++ compiler only. The drawback is the inflexibility of macro expansions. Enhanced Enum utilizes a proper code generator to create the necessary boilerplate for enum types. The generator is written in Python, and unlocks all the power and nice syntax that Python provides. The generated code is clean and IDE friendly. This approach enables the enums created using the library to have arbitrary values, not just strings derived from the enumerator names. The drawback is the need to include another library in the build toolchain. Getting started¶ The C++ library is header only. Just copy the contents of the cxx/include/ directory in the repository to your include path. If you prefer, you can create and install the CMake targets for the library: $ cmake /path/to/repository $ make && make install In your project: find_package(EnhancedEnum) target_link_libraries(my-target EnhancedEnum::EnhancedEnum) The enum definitions are created with the EnumECG library written in Python. It can be installed using pip: $ pip install EnumECG The library and code generation API are documented in the user guide hosted at Read the Docs. The author of the library is Jaakko Moisio. For feedback and suggestions, please contact [email protected]. User guide¶ Enhanced Enum is a header-only C++ library used to implement the enhanced enumeration types. EnumECG is a Python library is used to generate the enhanced enum definitions for the C++ application.
https://enhanced-enum.readthedocs.io/en/latest/
2021-01-15T20:45:36
CC-MAIN-2021-04
1610703496947.2
[]
enhanced-enum.readthedocs.io
One of the first things that needs to be done before U-Boot or uClinux can be used, is to setup a terminal program to communicate with the target device. Communication between a host computer and the UART(s) on the Blackfin device is achieved through the use of a terminal program. A serial cable is connected between the host computer and the development board, data is then transferred between the host computer and the development board via a terminal program. The STAMP board has a DB9 serial connector for this purpose, connectors on other development hardware may vary. For more information consult the documentation for your particular development hardware. To get started with U-Boot and uClinux a terminal program and a serial connection to the development board are required as U-Boot and uClinux both use this serial link for standard input and output. There are several terminal programs available for a number of platforms. Two common terminal programs, one for Windows and one for Linux, and the methods used to configure them for use with a STAMP board running U-Boot / uClinux are described below. Before preforming the procedures described below a serial cable should be connected between the host computer and the target system. The two main GNU/Linux communication programs are kermit and minicom. Before you can run either, (or any other serial program in Linux) you must ensure that you have read/write access to the serial port. Do you this look at /dev/ttyS0: rgetz@home:~> ls -l /dev/ttyS0 crw-rw---- 1 root uucp 4, 64 2005-03-19 17:01 /dev/ttyS0 You can see here that only the user root and the members of the group uucp have read/write access. To determine if you are in the group uucp check with the groups command: rgetz@home:~> groups users dialout video The root user must add you to the uucp group either by editing the /etc/group file, or by using the distributions graphical interface. Before the new group will take affect, you must log out, and log back in. Kermit is very easy to set up. Edit or create a ~/.kermrc file, to look something like this: set line /dev/ttyS0 define sz !sz \%0 > /dev/ttyS0 < /dev/ttyS0 set speed 57600 set carrier-watch off set prefixing all set parity none set stop-bits 1 set modem none set file type bin set file name lit set flow-control none set prompt "Linux Kermit> " /dev/ttyS0in the above file to whatever serial device you are using. It could be any of: /dev/ttyS0, /dev/ttyS1, /dev/ttyS2, … , /dev/ttyUSB0, /dev/ttyUSB1,… Then you can just evoke kermit by: rgetz@home:~> kermit C-Kermit 8.0.211, 10 Apr 2004, for Linux Copyright (C) 1985, 2004, Trustees of Columbia University in the City of New York. Type ? or HELP for help. Linux Kermit> To connect to the target, type connect: Linux Kermit>connect Connecting to /dev/ttyS0, speed 57600 Escape character: Ctrl-\ (ASCII 28, FS): enabled Type the escape character followed by C to get back, or followed by ? to see other options. ---------------------------------------------------- To send a file, just escape back to the kermit prompt and either use the send command (send kermit protocol), or sz (send zmodem protocol). This terminal program is available for the Linux platform. You may have to install this program if it is not included with your particular distribution of Linux. The first time Minicom is run you will have to initialize the settings. 
To do this preform the following steps: As root enter the following command: bash# minicom -s The Minicom Setup screen will now appear: Down arrow to Serial port setup and hit enter. The Serial port setup window should now appear: In this menu type the letter of the option you want to choose (e.g. 'A' would be Serial Device) and then edit the configuration for that option. The following settings should be entered: Serial Device: <choose the device the serial cable is connected to> (usually /dev/ttyS0) Lockfile Location: <blank> (to prevent the serial port from ever being locked) Callin : <blank> Callout : <blank> Bps/Par/Bits: 57600 8N1 (baud rate (57600), parity (N for none), stop bits (1)) Hardware Flow Control: No Software Flow Control: No Hit Esc to return to the main menu. Next the modem features must be disabled. This needs to be done because Minicom is normally used for modem communication and the default settings will be looking to establish communication through a modem. Back at the Minicom setup screen: Down arrow to Modem and Dialing and hit enter. The modem and dialing setup screen should now appear: In this menu hit the letter of the option you want to choose (e.g. 'A' would be Init string) and then edit the configuration for that option. The following settings should be entered: Init string: <blank> Reset string: <blank> Dialing prefix #1: <blank> Dialing suffix #1: <blank> Dialing prefix #2: <blank> Dialing suffix #2: <blank> Dialing prefix #3: <blank> Dialing suffix #3: <blank> Connect string: <blank> Hit Esc to return to the main menu Now that the configuration has been set it should be saved as the default configuration so that every time Minicom starts these settings will be restored. You can connect to the target system with telnet. telnet 192.168.1.66 Trying 192.168.1.66... Connected to 192.168.1.66. Escape character is '^]'. BusyBox v1.00 (2005.09.05-02:12+0000) Built-in shell (msh) Enter 'help' for a list of built-in commands. root:~> cat /etc/motd Welcome to: ____ _ _ / __| ||_| _ _ _ _| | | | _ ____ _ _ \ \/ / | | | | | | || | _ \| | | | \ / | |_| | |__| || | | | | |_| | / \ | ___\____|_||_|_| |_|\____|/_/\_\ |_| For further information see: root:~> For telnet to work you must have the telnet user package enabled. For a quick test use grep to look for TELNET in the user config file (config/.config) grep TELNET config/.config CONFIG_USER_TELNETD_TELNETD=y # CONFIG_USER_TELNETD_DOES_NOT_USE_OPENPTY is not set CONFIG_USER_TELNET_TELNET=y # CONFIG_USER_BUSYBOX_TELNET is not set # CONFIG_USER_BUSYBOX_TELNETD is not set Once this is confirmed also confirm that romfs/bin/telnetd has also been created. # on the development system ls -l romfs/bin/telnetd -rwxr--r-- 1 root root 37856 Sep 12 15:13 romfs/bin/telnetd This means that the file should be in the target image too. # on the target system root:~> ls -l /bin/telnetd -rwxr--r-- 1 0 0 37856 /bin/telnetd Then check that the telnet daemon is enabled on the target in the **inetd** config file. # On the target root:~> cat /etc/inetd.conf ftp stream tcp nowait root /bin/ftpd -l telnet stream tcp nowait root /bin/telnetd If all this works then you should be able to start a telnet session on the target as shown above. You can connect to the target system with rsh. rsh allows you to execute a single command on the target from a remote machine. The output generated from the command issued is only visible on the remote machine that it was issued from. rsh must first be built into your kernel. 
To do this follow these steps: In the uClinux-dist/ directory issue the following commands: # on the development system make clean make menuconfig Next a dialog box will appear. Here choose the box that says: Kernel/Library/Defaults Selection The next dialog box will appear. Here select: [*] Customize Vendor/User Settings Exit Exit Do you wish to save your new kernel configuration? Yes A new menu will appear. In this menu select: Blackfin app programs ---> In the next dialog box select: --- Inetutils [*] rsh [*] rcp [*] rshd Then: Exit Exit Do you wish to save your new kernel configuration? Yes Now compile the kernel: # on the development system make After you compile the kernel you can now load and run it (for more information on loading and running the kernel please see Downloading the Kernel to the Target) and begin to use rsh. After booting the kernel enter the following commands to start rshd and dhcpcd: # on the target Welcome to: ____ _ _ / __| ||_| _ _ _ _| | | | _ ____ _ _ \ \/ / | | | | | | || | _ \| | | | \ / | |_| | |__| || | | | | |_| | / \ | ___\____|_||_|_| |_|\____|/_/\_\ |_| For further information see: BusyBox v1.00 (2005.09.16-12:31+0000) Built-in shell (msh) Enter 'help' for a list of built-in commands. root:~> dhcpcd& 26 root:~> eth0: link down eth0: link up, 100Mbps, half-duplex, lpa 0x40A1 root:~> rshd & 28 root:~> Now issue an ifconfig to determine the IP address of the target. # on the target root:~> ifconfig eth0 Link encap:Ethernet HWaddr 00:E0:22:FE:06:19 inet addr:10.64.204.163 Bcast:10.64.204.255 Mask:255.255.255.0 UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1 RX packets:95 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 Interrupt:27 Base address:0x300 DMA chan:ff root:~> With the targets IP now known, we can access it using rsh on our host machine. To test this issue the following command from the host machine: # on the development system user@linux:~> rsh -l root 10.64.204.163 ls This command will issue the ls command on the target board and the results will be printed to the console of the host system. # on the development system user@linux:~> rsh -l root 10.64.204.163 ls bin dev etc home lib lost+found mnt proc root sbin tmp usr var user@linux:~> You can also preform a remote file copy, rcp, from your host to your target when rshd is running. To do this issue the following command from the host computer: # on the development system user@linux:~> rcp foo [email protected]:/ Where foo is the file to be copied, 10.64.204.163 is the target's IP address, and / is the directory where the file is to be copied to. More infos: Remote Shells This terminal program is an established alternative to hyperterminal, and also easier to setup, because you are not confused with setup dialogs related to PSTN-like connections. Two versions exists. V2.3 is under 1M and supports serial and TCP/IP connections with telnet (even colorful prompts are supported). Download URL: The more recent version V4.67 is 10 times as big, but that includes a bunch of add-ons, including SSH-2 support. Download URL: Upon first start, a dialog occurs where you must select between serial and TCP/IP. Subsequent starts do without this dialog. Serial settings must be set up in the Setup⇒Serial port… dialog. You may also wish to set up a big screen buffer: Setup⇒Window… (item Scroll buffer), or set up the default window size: Setup⇒Terminal… Upon each start-up, the terminal is active. 
If the set-up is done properly you should be able to see output from U‑Boot or uClinux when they are run. This terminal program is available for most Windows platforms. It usually comes pre‑installed and can be found under: Start>Programs>Accessories>Communication>HyperTerminal. When a Hyper Terminal session is started the Connection Description window will appear (if the Connection Wizard dialog appears simply complete it with dummy values as we will not be using a dial-up modem): Name: <type any appropriate name for the session>. Click OK. The Connect To window should now appear: Connect using: <choose the port the serial cable is connected to> (usually COM1). Click OK. The COMX Properties window should now appear: Bits per second: 57600 Data bits: 8 Parity: None Stop bits: 1 Flow Control: None Click OK. The terminal session should now be connected. You should now be able to see output from U‑Boot or uClinux when they are run. As a Windows counterpart for rcp/rsh, the puTTY package fits well. WinSCP uses puTTY resources and provides a dual pane file manager where you can even preset a directory of your local host machine for the left pane and a preset directory for the connected target. Does not work with sftp protocol, you should select SCP or allow “fallback to SCP” in the appropriate connect dialog. Home page: Note: Linux two pane file managers tend to fail in building up connections to blackfin targets, even when fish or scp (ssh-1) is supported. Complete Table of Contents/Topics
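If you prefer to script the serial link instead of using an interactive terminal program, the same port settings described on this page (57600 baud, 8 data bits, no parity, 1 stop bit, no flow control, typically on /dev/ttyS0) can be opened programmatically. The following is only an illustrative sketch, not part of the original documentation; it assumes the third-party pyserial package is installed on the host:
import serial  # third-party "pyserial" package: pip install pyserial
# Open /dev/ttyS0 with the same settings used for kermit/minicom: 57600 8N1,
# no hardware or software flow control.
port = serial.Serial(
    "/dev/ttyS0",
    baudrate=57600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,   # no software flow control
    rtscts=False,    # no hardware flow control
    timeout=1,       # read timeout in seconds
)
# Print whatever U-Boot or uClinux writes to the console for a few lines.
for _ in range(20):
    line = port.readline()
    if line:
        print(line.decode(errors="replace"), end="")
port.close()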
https://docs.blackfin.uclinux.org/doku.php?id=terminal_programs
2017-09-19T15:04:02
CC-MAIN-2017-39
1505818685850.32
[]
docs.blackfin.uclinux.org
After reading this guide, you’ll know: Securing a web application is all about understanding security domains and understanding the attack surface between these domains. In a Meteor app, things are pretty simple: In practice, this means that you should do most of your security and validation on the boundary between these two domains. In simple terms: Since Meteor apps are often written in a style that puts client and server code together, it’s extra important to be aware what is running on the client, what is running on the server, and what the boundaries are. Here’s a complete list of places security checks need to be done in a Meteor app: Each of these points will have their own section below.. It’s much easier to write clean code if you can assume your inputs are correct, so it’s valuable to validate all Method arguments before running any actual business logic. You don’t want someone to pass a data type you aren’t expecting and cause unexpected behavior.. Read more about how to use it in the Methods article. The rest of the code samples in this article will assume that you are using this package. If you aren’t, you can still apply the same principles but the code will look a little different. if (Meteor.isServer) {. Publications are the primary way a Meteor server can make data available to a client. While with Methods the primary concern was making sure users can’t modify the database in unexpected ways, with publications the main issue is filtering the data being returned so that a malicious user can’t get access to data they aren’t supposed to see. In a server-side-rendered framework like Ruby on Rails, it’s sufficient to simply not display sensitive data in the returned HTML response. In Meteor, since the rendering is done on the client, an if statement in your HTML template is not secure; you need to do security at the data level to make sure that data is never sent in the first place. All of the points above about Methods apply to publications as well: checkor aldeed:simple-schema. }); }); The data returned from publications will often be dependent on the currently logged in user, and perhaps some properties about that user - whether they are an admin, whether they own a certain document, etc.. For certain applications, for example pagination, you’ll want to pass options into the publication to control things like how many documents should be sent to the client. There are some extra considerations to keep in mind for this particular case. limitoption of the query from the client, make sure to set a maximum limit. Otherwise, a malicious client could request too many documents at once, which could raise performance issues. $andto. In summary, you should make sure that any options passed from the client to a publication can only restrict the data being requested, rather than extending it. Publications are not the only place the client gets data from the server. The set of source code files and static assets that are served by your application server could also potentially contain sensitive data:, the next section will talk about how to handle them. Every app will have some secret API keys or passwords: These should never be stored as part of your app’s source code in version control, because developers might copy code around to unexpected places and forget that it contains secret keys. 
You can keep your keys separately in Dropbox, LastPass, or another service, and then reference them when you need to deploy the app.": { "appId": "12345", "secret": "1234567" } } In your app’s JavaScript code, these settings can be accessed from the variable Meteor.settings. Read more about managing keys and settings in the Deployment article.: { appId: Meteor.settings.facebook.appId, loginStyle: "popup", secret: Meteor.settings.facebook.secret } }); Now, accounts-facebook will be able to find that API key and Facebook login will work properly. This is a very short section, but it deserves its own place in the table of contents. Every production Meteor app that handles user data should run with SSL. Yes, Meteor does hash your password or login token on the client before sending it over the wire, but that only prevents an attacker from figuring out your password - it doesn’t prevent them from logging in as you, since they could just send the hashed password to the server to log in! No matter how you slice it, logging in requires the client to send sensitive data to the server, and the only way to secure that transfer is by using SSL. Note that the same issue is present when using cookies for authentication in a normal HTTP web application, so any app that needs to reliably identify users should be running on SSL. Generally speaking, all production HTTP requests should go over HTTPS, and all WebSocket data should be sent over WSS. It’s best to handle the redirection from HTTP to HTTPS on the platform which handles the SSL certificates and termination. In the event that a platform does not offer the ability to configure this, the force-ssl package can be added to the project and Meteor will attempt to intelligently redirect based on the presence of the x-forwarded-for header. This is a collection of points to check about your app that might catch common errors. However, it’s not an exhaustive list yet—if we missed something, please let us know or file a pull request! insecureor autopublishpackages. audit-argument-checksto check this automatically. profilefield on user documents. this.userIdinside Methods and publications. © 2011–2017 Meteor Development Group, Inc. Licensed under the MIT License.
http://docs.w3cub.com/meteor~1.5/security/
2017-09-19T15:15:10
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
In this article, we show how to use the Azure portal to set up Azure Cosmos DB global distribution and then connect using the MongoDB API. This article covers the following tasks: - Configure global distribution using the Azure portal - Configure global distribution using the MongoDB API You can learn about Azure Cosmos DB global distribution in this Azure Friday video with Scott Hanselman and Principal Engineering Manager Karthik Raman. For more information about how global database replication works in Azure Cosmos DB, see Distribute data globally with Cosmos DB. Add global database regions using the Azure Portal Azure Cosmos DB is available in all Azure regions worldwide. After selecting the default consistency level for your database account, you can associate one or more regions (depending on your choice of default consistency level and global distribution needs). - In the Azure portal, in the left bar, click Azure Cosmos DB. - In the Azure Cosmos DB blade, select the database account to modify. - In the account blade, click Replicate data globally from the menu. In the Replicate data globally blade, select the regions to add or remove, and then save your changes. A manual failover option is also available in the portal; you can use this option to test the failover process or change the primary write region. Once you add a third region, the Failover Priorities option is enabled on the same blade. It is recommended to deploy both the application and Azure Cosmos DB in the regions that correspond to where the application's users are located. For BCDR, it is recommended to add regions based on the region pairs described in the Business continuity and disaster recovery (BCDR): Azure Paired Regions article. Verifying your regional setup using the MongoDB API The simplest way of double-checking your global configuration within the API for MongoDB is to run the isMaster() command from the Mongo Shell. From your Mongo Shell: db.isMaster() Example results: { "_t": "IsMasterResponse", "ok": 1, "ismaster": true, "maxMessageSizeBytes": 4194304, "maxWriteBatchSize": 1000, "minWireVersion": 0, "maxWireVersion": 2, "tags": { "region": "South India" }, "hosts": [ "vishi-api-for-mongodb-southcentralus.documents.azure.com:10255", "vishi-api-for-mongodb-westeurope.documents.azure.com:10255", "vishi-api-for-mongodb-southindia.documents.azure.com:10255" ], "setName": "globaldb", "setVersion": 1, "primary": "vishi-api-for-mongodb-southindia.documents.azure.com:10255", "me": "vishi-api-for-mongodb-southindia.documents.azure.com:10255" } Connecting to a preferred region using the MongoDB API The MongoDB API enables you to specify your collection's read preference for a globally distributed database. For both low latency reads and global high availability, we recommend setting your collection's read preference to nearest. A read preference of nearest is configured to read from the closest region. var collection = database.GetCollection<BsonDocument>(collectionName); collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Nearest)); For applications with a primary read/write region and a secondary region for disaster recovery (DR) scenarios, we recommend setting your collection's read preference to secondary preferred. A read preference of secondary preferred is configured to read from the secondary region when the primary region is unavailable. var collection = database.GetCollection<BsonDocument>(collectionName); collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.SecondaryPreferred)); Lastly, if you would like to manually specify your read regions,
you can set the region Tag within your read preference. var collection = database.GetCollection<BsonDocument>(collectionName); var tag = new Tag("region", "Southeast Asia"); collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Secondary, new[] { new TagSet(new[] { tag }) })); You can now proceed to the next tutorial to learn how to develop locally using the Azure Cosmos DB local emulator.
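The C# snippets above have direct equivalents in the other MongoDB drivers. As an illustration only (this is not part of the original tutorial), here is roughly how the same read preferences could be expressed with the Python driver, PyMongo; the connection string shown is a hypothetical placeholder that should be replaced with the one for your account:
from pymongo import MongoClient, ReadPreference
from pymongo.read_preferences import Secondary
# Hypothetical placeholder URI -- use the connection string for your own account.
client = MongoClient("mongodb://ACCOUNT:[email protected]:10255/?ssl=true")
db = client["mydatabase"]
# Read from the closest region (equivalent of ReadPreferenceMode.Nearest).
nearest_coll = db.get_collection("mycollection", read_preference=ReadPreference.NEAREST)
# Read from a secondary when the primary is unavailable (SecondaryPreferred).
dr_coll = db.get_collection("mycollection", read_preference=ReadPreference.SECONDARY_PREFERRED)
# Manually pin reads to a specific region using a tag set (Secondary plus region tag).
tagged_coll = db.get_collection(
    "mycollection",
    read_preference=Secondary(tag_sets=[{"region": "Southeast Asia"}]),
)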
https://docs.microsoft.com/en-us/azure/cosmos-db/tutorial-global-distribution-mongodb
2017-09-19T15:17:27
CC-MAIN-2017-39
1505818685850.32
[]
docs.microsoft.com
This document contains information for an outdated version and may not be maintained any more. If some of your projects still use this version, consider upgrading as soon as possible. Changelogs Keep up to date with new releases subscribe to the SilverStripe Release Announcements group, or read our blog posts about releases. We also keep an overview of security-related releases. For information on how to upgrade to newer versions consult the upgrading guide. Stable Releases - 2.4.11 - 2013-08-08 -.4.0 - 2.3.11 - 2 February 2011 - 2.3.10 - 21 December 2010 - 2.3.9 - 11 November.3.0 - 23 February 2009 - 2.2.4 - 20 March 2009 - 2.2.3 - ~31 October 2008 - 2.2.2 - 22 May 2008 - 2.2.1 - 21 December 2007 - 2.2.0 - 28 November 2007 - 2.1.1 - 2 November 2007 - 2.1.0 - 2 October 2007 - 2.0.2 - 14 July 2007 - 2.0.1 - 17 April 2007 - 2.0.0 - 3 February 2007 (initial release) Alpha/beta/release candidate ## -
https://docs.silverstripe.org/en/2.4/changelogs
2017-09-19T15:28:21
CC-MAIN-2017-39
1505818685850.32
[]
docs.silverstripe.org
The file format for these SST cubes is as follows. Note that this is gleaned from the header and from having reverse-engineered the file format. Each file has a header that is 512 bytes long and then the rest is binary data. The shape of the binary data varies between the icube and spcube; however, it is the same data as far as I can tell. The shape of the icube data is (nx,ny,bt). The shape of the spcube data is (). CRISPEX uses this file for a spectral view. However, I cannot see why. If anyone knows, it would be very helpful. These shortnames are explained in the Header section below. Icube header: nx, stokes, endian, dims, datatype, ns, nt, and ny. spcube header: nx, dims, ny, datatype, endian, and nt. Headers are 512 bytes long for each cube. If a cube has only one wavelength, the header for the spcube has nx as 4. This could be a mistake in the reduction pipeline of the data I have, and thus could be fixed in the latest pipeline. For the spcube file, ny is time and nt is nx times ny.
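Given the 512-byte header followed by raw binary data, one convenient way to access a cube without loading it entirely into memory is numpy's memmap. The sketch below is illustrative only: it assumes the header has already been parsed into nx, ny, nt and a numpy dtype, and the placeholder file name, dimensions, dtype and (nt, ny, nx) ordering are assumptions that must be adapted to the actual header fields listed above:
import numpy as np
ICUBE_PATH = "crispex.icube"   # hypothetical file name
# Values that would normally be parsed out of the 512-byte header.
nx, ny, nt = 962, 964, 100     # placeholder dimensions
dtype = np.dtype("<i2")        # placeholder: 16-bit little-endian integers
# Map the data section that starts right after the 512-byte header.
cube = np.memmap(ICUBE_PATH, dtype=dtype, mode="r", offset=512,
                 shape=(nt, ny, nx))
# Access a single frame without reading the whole file into memory.
frame0 = np.array(cube[0])
print(frame0.shape, frame0.dtype)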
http://docs.sunpy.org/projects/sunkit-sst/en/latest/format.html
2017-09-19T15:21:41
CC-MAIN-2017-39
1505818685850.32
[]
docs.sunpy.org
This part of the reference documentation covers Spring Framework’s support for the presentation tier (and specifically web-based presentation tiers) including support for WebSocket-style messaging in web applications. Spring Framework’s own web framework, Spring Web MVC, is covered in the first couple of chapters. Subsequent chapters are concerned with Spring Framework’s integration with other web technologies, such as JSF. Following that is coverage of Spring Framework’s MVC portlet framework. The section then concludes with comprehensive coverage of the Spring Framework Chapter 26, WebSocket Support (including Section 26.4, “STOMP Over WebSocket Messaging Architecture”).
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/spring-web.html
2017-09-19T15:28:37
CC-MAIN-2017-39
1505818685850.32
[]
docs.spring.io
How to: Install SQL Server Compact Edition on a Device If you use Microsoft Visual Studio to build a .NET application that uses Microsoft SQL Server 2005 Compact Edition (SQL Server Compact Edition), the first time that you deploy the application to a device, the SQL Server Compact Edition engine is automatically installed on the device. You can also install SQL Server Compact Edition to a device by manually copying the .cab files to the device. If you are building a native application, you must manually copy the .cab files. To manually install SQL Server Compact Edition, copy the appropriate .cab files for your device platform and processor to the device and install them there. Note: You only need to install sqlce30.repl.platform.processor.cab if your application uses merge replication or remote data access. See Also Tasks How to: Install Query Analyzer (SQL Server Compact Edition) Concepts Installing and Deploying on a Device (SQL Server Compact Edition) Other Resources Installing SQL Server Compact Edition Help and Information Getting SQL Server Compact Edition Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms171875(v=sql.90)
2018-05-20T16:32:58
CC-MAIN-2018-22
1526794863626.14
[]
docs.microsoft.com
bool OEReadCSVFile(oemolistream &ifs, OEMolBase &mol, unsigned int flavor = OEIFlavor::CSV::DEFAULT) Read the next molecule from a comma-separated-value, CSV, file opened by ifs into mol. The specific layout of the file is described in the CSV File Format section. The flavor argument is a value OR’d together from the values in the OEIFlavor.CSV namespace. Returns true if a molecule was successfully read, false otherwise.
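This page documents the C# signature; the OpenEye toolkits expose the same function from other languages as well. The sketch below is only an assumption of how the call might look from the Python toolkit (it presumes openeye.oechem provides OEReadCSVFile with a matching signature) and uses a hypothetical input file name:
from openeye import oechem
ifs = oechem.oemolistream("molecules.csv")   # hypothetical CSV input file
mol = oechem.OEGraphMol()
# Read molecules one by one until OEReadCSVFile returns False.
count = 0
while oechem.OEReadCSVFile(ifs, mol):
    count += 1
    print(mol.GetTitle())
    mol.Clear()
ifs.close()
print("read", count, "molecules")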
https://docs.eyesopen.com/toolkits/csharp/oechemtk/OEChemFunctions/OEReadCSVFile.html
2018-05-20T15:47:23
CC-MAIN-2018-22
1526794863626.14
[]
docs.eyesopen.com
NSSetQuantumClock (Transact-SQL) Resets the Microsoft SQL Server Notification Services application quantum clock to the start time of a previous quantum. A new quantum is created with the same UTC (Coordinated Universal Time or Greenwich Mean Time) start time as the quantum specified in the stored procedure. This allows you to replay past quanta. To replay quanta, the NS$instance_name service must be running and the instance must be enabled. Syntax [ schema_name . ] NSSetQuantumClock [ @QuantumId = ] quantum_ID Arguments - [ @QuantumId =] quantum_ID Is the unique identifier of a past quantum. quantum_id is int and has no default value. Return Code Values 0 (success) or 1 (failure) Result Sets None Remarks Notification Services creates the NSSetQuantumClock stored procedure in the application database. In the application definition file (ADF), ensure that the ChronicleQuantumLimit and SubscriptionQuantumLimit values are zero, which means there is no limit to how far back you can process quanta. If nonzero limits are specified, the generator might not be able to replay quanta that fall outside those limits. Permissions Execute permissions default to members of the NSGenerator and NSRunService database roles, db_owner fixed database role, and sysadmin fixed server role. Examples The following example shows how to reset the quantum clock to replay previous quanta, starting at quantum number 1. A new quantum is entered into the NSQuantum1 table with a new quantum number, but with a StartTime value equal to the StartTime value of quantum 1. The application uses the default SchemaName settings, which places all application objects in the dbo schema. EXEC dbo.NSSetQuantumClock @QuantumId = 1; For example, if four quanta currently exist in the NSQuantum1 table, and quantum 1 started at 2002-05-23 17:23:37.640, when you run this example, quantum 5 is entered with a start time of 2002-05-23 17:23:37.640. When you enable the generator, the generator replays all quanta starting at quantum 1. See Also Reference Notification Services Stored Procedures (Transact-SQL) NSSetQuantumClockDate (Transact-SQL) Other Resources Notification Services Performance Reports SchemaName Element (ADF) Help and Information Getting SQL Server 2005 Assistance
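The EXEC example above can also be issued from application code. Purely as an illustration (this is not part of the original reference page), here is how the call might look through pyodbc; the connection string and database name are hypothetical placeholders:
import pyodbc
# Hypothetical connection string -- point it at the Notification Services
# application database that contains the NSQuantum1 table.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=MyNSApp;Trusted_Connection=yes;"
)
cursor = conn.cursor()
# Reset the quantum clock to the start time of quantum 1.
cursor.execute("EXEC dbo.NSSetQuantumClock @QuantumId = ?", 1)
conn.commit()
conn.close()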
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms179918(v=sql.90)
2018-05-20T16:52:25
CC-MAIN-2018-22
1526794863626.14
[]
docs.microsoft.com
Patterns for instances using domain separation In instances that use domain separation, patterns may be domain-specific, covering only the domains that you created them for, or global, applying to all domains. This information is relevant only for instances that use domain separation.
https://docs.servicenow.com/bundle/istanbul-it-operations-management/page/product/service-mapping/concept/c_DomSepPatterns.html
2018-05-20T15:40:12
CC-MAIN-2018-22
1526794863626.14
[]
docs.servicenow.com
You can verify your vRealize Operations for Horizon installation using the Horizon Adapter Self Health dashboard. The Horizon Adapter Self Health dashboard shows health information for the Horizon adapters and broker agents in your installation. Prerequisites: Install and configure vRealize Operations for Horizon. Use the Horizon Adapter Statistics widget to view metrics, including Event DB statistics, for the selected adapter. For troubleshooting tips, see "Troubleshooting a vRealize Operations for Horizon Installation" in the VMware vRealize Operations for Horizon Administration document.
https://docs.vmware.com/en/VMware-vRealize-Operations-for-Horizon/6.5/com.vmware.vrealize.horizon.install/GUID-2DDE8CCB-0FCD-419C-96CB-43C6BC6130CE.html
2018-05-20T15:28:04
CC-MAIN-2018-22
1526794863626.14
[]
docs.vmware.com
Example 1—Permitting Access to All Members In this example, a database role has read permission to all cells in a cube. Reviewing the Result Set Based on these cell data permissions for this database role, a query on all cells returns the result set shown in the following table. Important If a Microsoft Windows user or group belongs to multiple database roles, a query on all cells would first result in a dataset being generated based on each database role to which the user or group belongs. Then, Microsoft SQL Server 2005 Analysis Services (SSAS) would combine all of these datasets into one dataset, and return that combined dataset to the user or group. See Also Concepts Granting Custom Access to Cell Data Example 2—Permitting Access to a Single Member Example 3—Denying Access to a Single Member Example 4—Limiting Access to a Member and its Descendants Example 5—Giving Access to a Specific Measure Within a Dimension Example 6—Excluding Selected Measures from a Dimension Example 7—Making Exceptions to Denied Members Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms174904(v=sql.90)
2018-05-20T16:21:54
CC-MAIN-2018-22
1526794863626.14
[]
docs.microsoft.com
Data policy fields These fields appear on the Data Policy form and related forms. Table 1. Data policy fields Field Description Table The table to which this policy applies. Note: The list shows only tables and database views that are in the same scope as the data policy. Application Application that contains this data policy. Inherit If selected, applies this data policy to tables that extend the specified table. For example, incident, problem, and change tables all extend the task table, therefore selecting Inherit on a data policy defined for task would apply the data policy to them as well. Reverse if false If selected, the data policy action is reversed when the conditions evaluate to false. For example, when the conditions are true, then actions are taken and when they change to false, the actions are reversed. Active If selected, the data policy is used. Short description A short description that identifies the policy. Description A detailed description of the policy. Apply to import sets If selected, the data policy applies to data brought into the system from import sets. This option also applies to web service import sets. Apply to SOAP If selected, the data policy applies to data brought into the system from a SOAP web service. Scripted SOAP web services are not affected. This field does not affect data policy interaction with REST web services. Use as UI Policy on client If selected, enforces the data policy on the UI using the UI policy engine. Table 2. Data policy rule fields Field Description Table The table on which the data policy action applies. Field name The field from the specified table to which the data policy will apply. Read Only How the data policy affects the read only state of the field. Choices are: Leave alone True False Mandatory How the data policy affects the mandatory state of the field. Choices are: Leave alone True False Note: For tables that are in a different scope than the data policy record, you cannot make a field mandatory.
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/field_administration/reference/r_DataPolicyFields.html
2018-05-20T15:32:32
CC-MAIN-2018-22
1526794863626.14
[]
docs.servicenow.com
Decision Insight 20180319 End-user This page is for end-users of Axway Decision Insight (DI) who were previously working with the Flex dashboards. This page lists the main changes that you now get when working with the HTML view About the end-user role The end-user is the user in charge of monitoring the dashboards from the DI application. HTML changes In HTML, the user experience has been redesigned to improve the product. The following table lists the main changes. Context Impact Comments User profile edition No longer supported User profile information is managed by your administrator or by an external repository (LDAP, SSO). Dashboard comments No longer supported Since the URLs reflects the current application state (see Dashboard browsing URL format), you can easily share these URLs with other users. Favorites dashboards No longer supported Since the URLs reflects the current application state (see Dashboard browsing URL format), you can use the bookmarking capability of your browser. Dashboard parameters No longer supported The functions that accept multiple inputs (for example, "In") are not available for the CalendarDuration, Duration and Instant types. No longer supported If a dashboard hyperlink using a classifier or a boolean parameters is configured to always bring no result, it is considered as invalid, and is ignored.e.g. A query result filtered using a boolean value AND(false) will always be empty. A query result filtered using a classifier LOWER THAN (LOW) will always be empty as well if LOW is the smallest value. UX change The instances listed in an Entity parameter drop-down can now be filtered and selected through an autocomplete search field. UX change The available values for classifier and boolean parameters are displayed as a multi-selection list. It doesn't require the function selection to set the parameter value. Acknowledge mashlet UX limitation The font size and color of the comment cannot be changed. Pagelet "Dashboard listing" UX change The list of dashboards is exclusively accessible from the All dashboards screen. Pagelet "Search by Criteria" UX change The new layout of a dashboard is made of an area for mandatory parameters and an area for filters (a.k.a. non-mandatory parameters) that makes the Search by Criteria pagelet obsolete. Relation Editor mashlet UX change If related entity has a classifier attribute in its key, it will be represented as a label, not an icon. Pie chart mashlet UX change The values are displayed from 12 o'clock clockwise while in FLEX it is from 3 o'oclock counter-clockwise. Related Links
https://docs.axway.com/bundle/DecisionInsight_20180319_allOS_en_HTML5/page/end-user.html
2018-05-20T15:40:37
CC-MAIN-2018-22
1526794863626.14
[]
docs.axway.com
Decision Insight 20171120 Decision Insight Messaging System What is Decision Insight Messaging System ? The Decision Insight Messaging System is the only recommended way to send messages between two Decision Insight applications in a safe and reliable way. Data is sent from one application to the other via messages. Messages are the fundamental unit of processing. Usage There are two possible ways to configure connection to a DIMS cluster: using a DIMS connector: used to configure DIMS cluster's servers and TLS configuration (this is the recommended way to configure a DIMS connection) directly in the uri of the routing context: DIMS cluster's servers and TLS configuration reference are set directly in routing context. Create a connector Select DIMS class type to create a DIMS connector. Available properties Property Description servers Comma separated list of Decision Insight Messaging Server members of the targeted cluster. The format is serverName:port sslContextParameters Refers to the SSL configuration from a connector. See How to configure SSL on a component for Mutual Authentication . You can create a data integration property named DIMS_SERVERS with the list of messaging servers to interact with. "DIMS_SERVERS" property for a DIMS cluster of 3 nodes with default port configuration server-a.acme.int:9092,server-b.acme.int:9092,server-c.acme.int:9092 Sending a message To publish messages to Decision Insight Messaging System, you will create a route similar to: Integration Node <routes xmlns="" xmlns: <route> <!-- Read from JMS --> <from uri="jms:queue:test?connectionFactory=activemq"/> <!-- Process the message --> <!-- ... --> <!-- Set the body --> <setBody> <u:map-create> <u:map-entry <!-- Extract the payment id --> </u:map-entry> <u:map-entry <!-- Extract the amount--> </u:map-entry> <u:map-entry <!-- Extract when this payment update occurred --> </u:map-entry> <u:map-entry <!-- Extract the account name--> </u:map-entry> </u:map-create> </setBody> <!-- Set the message key --> <setHeader headerName="DIMS.KEY"> <simple>${body[accountName]}</simple> </setHeader> <to uri="dims:DimsConnector?topic=payments"/> <!-- Equivalent configuration without connector usage <to uri="dims:{{DIMS_SERVERS}}?topic=payments&sslContextParameters=#sslcp"/> --> </route> </routes> Line 5, read from JMS. You can replace this with any source. Line 9, the message body must be a Map. Line 26, we provide a message key. If you don't need to enforce message order, then you can remove this <setHeader> block. Line 29, forward the message to the Messaging System. When using JMS, the message is automatically acknowledged at the end of the processing (here after the <to uri="dims..." />). Most sources work the same way. Where the following URI options are available: Name Required Default value Description uri yes none DIMS connector or Comma separated list of Decision Insight Messaging Server members of the targeted cluster (format is serverName:port). topic yes none Topic on which to publish messages sslContextParameters yes (if not provided by DIMS connector) none Refers the SSL configuration from a connector.See How to configure SSL on a component with mutual authentication . retries no 0 Setting a value greater than zero will cause the client to resend data for which send fails. maxRequestSize no 1048576 (Advanced) The maximum size a request (message or batch of messages) can have. Tune this setting if you expect to send huge messages. Messages are ordered within a partition. 
Messages with the same DIMS.KEY will go to the same partition, so their order will be preserved, while messages with different keys received in rapid succession (in a 5 second time-frame) may not preserve their order. Omitting the DIMS.KEY header allows messages to be distributed randomly across partitions. Receiving a message To receive messages from Decision Insight Messaging System, you will create a route similar to: Application Node <routes xmlns="" xmlns: <route> <!-- Read from DIMS --> <from uri="dims:DimsConnector?topic=payments&offsetRepository=#payments&groupId=application"/> <!-- Equivalent configuration without connector usage <from uri="dims:{{DIMS_SERVERS}}?topic=payments&sslContextParameters=#sslcp&offsetRepository=#payments&groupId=application"/> --> <!-- Absorb directly --> <to uri="tnd-absorption:updatePayment"/> </route> </routes> Line 7, send data to a mapping for absorption. See Configure mappings for information on how to configure mappings. If absorption is not your goal, you may replace this with any destination. Where the following URI options are available: Property Mandatory Default value Description uri yes none DIMS connector or Comma-separated list of Decision Insight Messaging Server members of the targeted cluster (format is serverName:port). topic yes none Topic to use sslContextParameters yes (if not provided by DIMS connector) none Refers the SSL configuration from a connector.See How to configure SSL on a component with mutual authentication. groupId yes none Id of consumer group. Each consumer group will receive a copy of the message. Offset management offsetRepository yes none Offset repository to use to store the current progress. Set the name of a map state to use it and resume when re-starting the route. See How to configure states. startFrom no latest Position to start consuming message if the last known offset was purged there is no last known offset Can be set to consume the earliest or the latest available messages. To fail consumption and throw an exception, use none. consumersCount no 1 The number of consumers that connect to the Decision Insight Messaging Server. Related Links
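The ordering rule above (messages with the same DIMS.KEY go to the same partition, so their order is preserved) can be pictured with a small, purely conceptual sketch. This is not the actual DIMS implementation or API; it is only an illustration of why messages that share a key stay ordered while keyless messages are spread across partitions:
import random
NUM_PARTITIONS = 3
def partition_for(key, num_partitions=NUM_PARTITIONS):
    # Conceptual stand-in for the broker's key hashing; any stable hash works.
    if key is None:
        # Keyless messages can land on any partition.
        return random.randrange(num_partitions)
    return hash(key) % num_partitions
# Two updates for the same account always map to the same partition,
# so a consumer sees them in the order they were produced.
print(partition_for("account-42") == partition_for("account-42"))  # True
# Messages without a key may end up on different partitions,
# so there is no ordering guarantee between them.
print(partition_for(None), partition_for(None))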
https://docs.axway.com/bundle/DecisionInsight_20171120_allOS_en_HTML5/page/decision_insight_messaging_system.html
2018-05-20T15:45:21
CC-MAIN-2018-22
1526794863626.14
[]
docs.axway.com
Hierarchical groups The Hierarchical Groups feature extends row-level security where the parent group has access to the data that belongs to its child groups. The following image shows parent/child relationships in a hierarchical group. For more information about hierarchical groups, see the following topics: - Using a parent group for permissions inheritance - Controlling access to requests for Hierarchical groups - Configuring settings E-M - Assigning permissions for individual or multiple objects CMDB attributes and groups that define permission for hierarchical groups During installation or upgrade, the following new attributes and groups are added silently to all the root-level classes that define hierarchical permissions. - CMDBWriteSecurity_Parent(Field ID: 60914) - CMDBRowLevelSecurity_Parent(Field ID: 60989) Note: Child classes inherit the above fields from the parent class. - Dynamic groups with group IDs 60914 and 60989. Note: The permissions list of the RequestId field, includes the two new permissions consisting of the dynamic groups. The mapping between the permission field and the associated parent group field is displayed on each form in the Dynamic Permission Inheritance field under permissions tab. For example, If the permission field is 112, the associated parent group field is 60989. On the form other than the CMDB class forms, for example the Product Catalog forms, the 112 permission field and the associated parent group field 60989 is seen along with RequestId and permission mapping (if applicable).
https://docs.bmc.com/docs/ac91/hierarchical-groups-611385784.html
2019-08-17T18:16:14
CC-MAIN-2019-35
1566027313436.2
[]
docs.bmc.com
Self Service via the Service Catalog¶ Scalr provides Cloud Deployment Self Service via the Service Catalog. This capability enables simple deployment of even the most complex cloud based services and applications at the click of a button. The Service Catalog is available to all types of users of Scalr including development teams, QA and non-technical end-users. In the case of non-technical end users Scalr can be configured to provide only the Service Catalog functionality by associating end user teams with restricted Access Control Lists. This results in end users seeing a very limited set of menu options. The Service Catalog provides a list of predefined offerings that contain all the necessary configuration details to create Applications and deploy them to the Cloud. Users can select an offering from the list and then request to create an application from it. The user will be prompted for various input according to configuration before the Application is launched. The Service Catalog itself is accessed from the Main Menu. Clicking on itself opens the list of available offerings to allow users to request applications. “My Applications” opens the screen of already created applications. Requesting Applications¶ The Service Catalog selection screen will, by default, show all available offerings. Offerings can optionally be grouped into Categories by the person who sets up the Service Catalog. Users can refine the list of available offerings by selecting a category from the search bar drop down and/or by typing filter text into the search bar. Click on the button to start the Application creation dialogue. In the simplest case users may only be asked to provide a name for the new application before reviewing and launching. At the other extreme the user could be asked to enter a number of choices and detail. The amount of input required is governed by the configuration of the offering and any Policies that are in place for the Environment. The following screens walk through an example where significant user input is required. Set the name and Project Choose the Cloud, Location and cloud specific parameters Note that the input parameters are tailored to the chosen cloud. Note how mandatory parameters can be highlighted in red if incomplete or invalid entries are submitted. Review and Launch Review the deployment parameters and then Create and Launch. If any amendments are required click on the appropriate “STEP” tab on the left hand side to make the changes. Managing Applications¶ After launching the application the “MY APPLICATIONS” will be displayed and will show the status of the Application request. This screen can also be accessed from the . From here applications can be Started, Suspended, Terminated (Stop), Deleted and Locked. The Delete and Start buttons are only visible when an application is in a Stopped stated. The Lock button disables the the other available buttons to minimise the risk of accidentally changing the state of an application. Click on the Application to see more details including a list of the Servers that are being launched and their IP Addresses. Access to the dashboard for each server is through the button alongside each server in the list. This gives access to full details of each server, including the Health statistics.
http://docs.scalr.com/en/latest/service_catalog/using_service_catalog.html
2019-08-17T16:57:34
CC-MAIN-2019-35
1566027313436.2
[array(['../_images/sc_user_menu.png', '../_images/sc_user_menu.png'], dtype=object) array(['../_images/sc_select.png', '../_images/sc_select.png'], dtype=object) array(['../_images/simple_2_general.png', '../_images/simple_2_general.png'], dtype=object) array(['../_images/review.png', '../_images/review.png'], dtype=object) array(['../_images/apps_prov.png', '../_images/apps_prov.png'], dtype=object) array(['../_images/sc_user_menu.png', '../_images/sc_user_menu.png'], dtype=object) array(['../_images/app_svr_detail.png', '../_images/app_svr_detail.png'], dtype=object) array(['../_images/svr_dash.png', '../_images/svr_dash.png'], dtype=object)]
docs.scalr.com
Exporting Layout Images T-SBADV-012-003 The Export Layout window lets you export some or all of the scenes in your project to layout images. This can be used to properly position scene elements when working on different aspects of the scene throughout production. For example, a layout can be imported in Harmony to accelerate its setup and properly position the elements and camera keyframes of the scene. Also, a layout exported to .psd format can serve as the base for creating the background art for the scene. - Select File > Export > Layout. - In the Export Layout Options panel, select the options you want to use for the exported layout images. - To view the location and contents of the exported folder when it is ready, select the Open folder after export option. - Click on the Export button.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/export/export-layout.html
2019-08-17T17:48:38
CC-MAIN-2019-35
1566027313436.2
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Export/export-layour.png', None], dtype=object) ]
docs.toonboom.com
Conda Python API¶ As of conda 4.4, conda can be installed in any environment, not just environments with names starting with _ (underscore). That change was made, in part, so that conda can be used as a Python library. There are three supported public modules. We support: import conda.cli.python_api import conda.api import conda.exports The first two should have very long-term stability. The third is guaranteed to be stable throughout the lifetime of a feature release series--i.e. minor version number. As of conda 4.5, we do not support pip install conda. However, we are considering that as a supported bootstrap method in the future. Contents:
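As a brief illustration of the first supported module, the snippet below sketches how conda.cli.python_api is typically driven; treat the exact argument list and return values as an assumption and check the module's reference documentation for the authoritative signatures:
from conda.cli.python_api import Commands, run_command
# Run "conda list" for the base environment and capture its output.
stdout, stderr, return_code = run_command(Commands.LIST, "-n", "base")
if return_code == 0:
    print(stdout)
else:
    print("conda list failed:", stderr)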
https://docs.conda.io/projects/conda/en/master/api/index.html
2020-01-17T16:19:09
CC-MAIN-2020-05
1579250589861.0
[]
docs.conda.io
Causal Consistency Causal Consistency Side Effects When the Causal Consistency option is enabled, each CRDB instance should maintain and relay the order of operations it received from another CRDB instance to all other N-2 CRDB instances, where N represents the number of instances used by the CRDB. As a result, network traffic is increased by a factor of (N-2). The memory consumed by each CRDB instance and overall performance are also impacted when Causal Consistency is activated. Enabling Causal Consistency During the creation of a CRDB, the Causal Consistency parameter should be set as illustrated in the figure below: Once set, the option can be enabled or disabled only by using the REST API or the crdb-cli tool. In this case, the updated CRDB behavior applies only to commands and operations received after the change.
https://docs.redislabs.com/latest/rs/administering/database-operations/causal-consistency-crdb/
2020-01-17T17:09:19
CC-MAIN-2020-05
1579250589861.0
[array(['../../../../images/rs/create_db_causal.png?width=1050&height=930', 'create_db_causal'], dtype=object) ]
docs.redislabs.com
Contact Form Element What is a Contact Form? Contact Forms are designed to collect personal information from respondents. This information usually comprises: - Name - Company/ Job/ Occupation - Address, City/Town, State/Province/ County - Zip Code/ Postcode - Country - Phone Number Our Contact Form skeleton will contain a box to receive each of these information types. However, you will be able to chop and change what information is required. Why use the Contact Form? Using the Contact Form, you can collect all the personal information you may need about your respondents. Participants may be hesitant to give up this information, so it's probably best to inform them of what you intend to do with their information. Tip: If you intend to use this information for purposes other than collating respondent demographic information, you may also want to include a Consent Form in your project. How to create a Contact Form - Create a Survey, Form, or Quiz - Click ‘Add Page Items’ in the sidebar OR Scroll to the bottom of your survey and click the ‘Add Items’ button - Drag and Drop Contact Form into the necessary place OR Click Contact Form and then ADD TO PAGE - Give the Contact Form a title - Assign headings to the individual text fields - Navigate the Question Quick Menu for additional settings and customization Default Settings The default settings for Contact Forms will display ten text boxes when the question type is inserted. These include: - Name - Company - Address - Address 2 - City/ Town - State/ Province - Zip/ Postal Code - Country - Phone Number Each text box aims to collect a separate, and relevant, piece of information about your respondents. These boxes can be customized to collate most types of information, as long as the space required is relatively small. To rename these sections, simply click the text box and replace the text. Each text field will be automatically made a required field, but you can toggle this feature through the settings menu for each individual text box. Question Quick Menu Inside the Question Quick Menu for Contact Forms, you will find these options: - Undo: Reverse your previous action or measure. - Redo: Once ‘Undo’ is clicked, the option transforms into the ‘Redo’ button. This will restore your previous action or measure. - Delete: Permanently removes the question from the project. - Copy: Duplicate your question inside of the project. - Move to Page: Select another page to move the question to. - Settings: Opens additional, question specific, features and settings. Question Settings Menu The Settings button, in the Question Quick Menu, will grant you access to the following settings: - Number this question: This button toggles the numbering for the concerned question. - Address Lookup: This option allows respondents to search for their address using a Zip Code/ Postal Code. This feature is only available in the United Kingdom and United States of America. How respondents interact with the Contact Form Participants will be able to fill out each section of the contact form in the same way as Text Box (Single) questions. If you've activated the Address Lookup feature, you may find that some sections of your Contact Form disappear. These sections will reappear once the respondent has entered their zip code/ postcode and searched for their address. Once the code is entered, a dropdown list of matching addresses appears for the respondent to select from. How to analyze the Contact Form Quick Report Contact details will be compiled as a list of text responses in your Quick Report.
They will be presented in the order they were collected for easy reference.
https://docs.shout.com/article/41-contact-form-element
2020-01-17T17:25:09
CC-MAIN-2020-05
1579250589861.0
[]
docs.shout.com
Product Index Create cool 3D models in Hexagon and create an income while having fun! Whether you're dreaming of making cars, outfits for Genesis, buildings, organic shapes or sci-fi… It's all here… Step by step, from your first steps, to complete detailed DAZ Studio / Poser models that you can sell at DAZ 3D and create an income from home. Brought to you by bestselling DAZ 3D vendor, professional 3D artist, modeler, coach and mentor, Val Cameron a.k.a. Dreamlight. Featuring bestselling DAZ 3D vendor and professional modeler Jason White.
http://docs.daz3d.com/doku.php/public/read_me/index/17512/start
2020-01-17T16:16:02
CC-MAIN-2020-05
1579250589861.0
[]
docs.daz3d.com
So what about the driver store? I was asked this recently and I thought it was an interesting question: I know I can't delete things out of the component store, but what about the driver store? Can I remove files from there without hurting anything? The answer to this is: YES. The driver store is a serviceable entity. Drivers that Microsoft authors, like NTFS.SYS for example, obviously live here and are updated. Does that mean that you need to keep every version of NTFS, or your old Nvidia or ATI drivers for that matter? Of course not. And for systems that are space sensitive, this would be one place to potentially see some impact on the bottom line when it comes to disk usage, especially if the machine were an upgrade or was in service for a while. So, how do you get rid of the old drivers? Personally, I always use pnputil.exe; it’s inbox and is fairly easy to use. Here’s an example: First, I would list out all of the drivers that are OEM. The reason being that this has the highest likelihood of producing duplicates over time as the drivers are updated. Here’s an example of my machine: pnputil -e Microsoft PnP Utility Published name : oem0.inf Driver package provider : Microsoft Class : Printers Driver date and version : 06/21/2006 6.1.7600.16385 Signer name : Microsoft Windows Published name : oem1.inf Driver package provider : NVIDIA Class : Display adapters Driver date and version : 06/09/2010 8.17.12.5849 Signer name : Microsoft Windows Hardware Compatibility Publisher From here, you can remove any oem.inf’s you find that might be extra; my machine is pretty clean, as you can see from the example, but if I had updated my Nvidia driver several times, I might have 3-4 of those sitting around. Once I have identified the driver I want to get rid of (let’s say the printer driver I have installed), I use the following command: pnputil -d oem0.inf It will remove the INF and the associated driver package in the store from the machine. For more on command syntax and usage, see the MSDN page for PnPUtil. Hope that helps, --Joseph
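If you have many machines to check, the enumeration step can be scripted. The sketch below is merely illustrative and is not from the original post: it shells out to the same pnputil -e command and groups the Published name entries by driver provider and class so that likely duplicates stand out.
import re
import subprocess
from collections import defaultdict
# Run the same enumeration command shown above and capture its text output.
output = subprocess.run(["pnputil", "-e"], capture_output=True, text=True).stdout
# Group oemNN.inf entries by (provider, class) to spot potential duplicates.
groups = defaultdict(list)
current = {}
for line in output.splitlines():
    match = re.match(r"\s*(Published name|Driver package provider|Class)\s*:\s*(.+)", line)
    if not match:
        continue
    field, value = match.group(1), match.group(2).strip()
    if field == "Published name":
        current = {"Published name": value}
    else:
        current[field] = value
    if len(current) == 3:
        groups[(current["Driver package provider"], current["Class"])].append(current["Published name"])
        current = {}
for (provider, device_class), names in sorted(groups.items()):
    marker = "  <-- possible duplicates" if len(names) > 1 else ""
    print(f"{provider} / {device_class}: {', '.join(names)}{marker}")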
https://docs.microsoft.com/en-us/archive/blogs/joscon/so-what-about-the-driver-store
2020-01-17T17:41:15
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Add the devtools plugin to your application so that your app and the devtools can communicate properly. You can find details here.

Once you have installed the developer tools and turned them on as described above, they will appear to the side of your application when you start it up as normal. If you want to hide them, just toggle devtools to false. We are planning to add options to run the developer tools in a different way, for example as a browser extension. If that interests you, please let us know.

One panel shows the complete current internal state of your application. It is interactive; if you change the state in the panel, your application will reflect the changes.

Another panel shows all the actions triggered in your application in chronological order. You can also find a lot of metadata about each action, for example its origin, arguments, state patches, and any recorded effects, re-renders, or next actions that were triggered.

A third panel helps you see which of your components got re-rendered, how often, and why. You can use this to debug your application's performance.
https://docs.prodo.dev/basics/devtools/
2020-01-17T16:01:42
CC-MAIN-2020-05
1579250589861.0
[]
docs.prodo.dev
Software Requirements

Nessus supports Mac, Linux, and Windows operating systems.

Nessus Scanner, Nessus Manager, and Nessus Professional

See the following table to understand the software requirements for Nessus scanners, Nessus Professional, and Nessus Manager.

Nessus Agents

For Nessus Agent software requirements, see the Agent Software Requirements in the Nessus Agent User Guide.

Supported Web Browsers

Nessus supports the following browsers:

- Google Chrome (50+)
- Apple Safari (10+)
- Mozilla Firefox (50+)
- Internet Explorer (11+)

Note: For Nessus 7.0 and later, you must enable Transport Layer Security (TLS) 1.2 in your browser.
https://docs.tenable.com/nessus/8_6/Content/SoftwareRequirements.htm
2020-01-17T15:33:37
CC-MAIN-2020-05
1579250589861.0
[]
docs.tenable.com
CellBase 5.x Releases

5.0.0 (December 2019)

You can track GitHub issues at GitHub Issues 5.0.0. You can follow the development at GitHub Projects.

General
- Improve tests and verification
- New Docker images
- Upgrade dependencies: MongoDB 4.2, JUnit 5.5.x
- Cleanups

Build
- Improve Variation build performance
- Improve test coverage

Databases
- pLI/pLoF scores from ExAC and gnomAD
- Add gnomAD v3 and TOPMed frequencies

Cloud
- Add AWS CloudFormation and Azure ARM templates
- Add Kubernetes for deployment and orchestration

CellBase 4.x Releases

CellBase 4.x is almost closed; however, we are currently working on CellBase 4.8.0, which will be the last release of the 4.x series. There are many other new features that will be available in future releases. A summary of the current roadmap includes:

- Knowledge-base improvements, mainly new clinical data
- Variant annotation: improve structural variation annotation, extend population frequencies data and improve annotator performance
- Improve automatic testing procedures for loaded data and variant annotation
- Improve CellBase data accessibility: provide Python/R libraries that enable easy and intuitive programmatic access to CellBase
- General documentation improvements: from code to wiki documentation
http://docs.opencb.org/pages/diffpagesbyversion.action?pageId=15598684&selectedPageVersions=7&selectedPageVersions=6
2020-11-23T22:49:41
CC-MAIN-2020-50
1606141168074.3
[]
docs.opencb.org
When the local-kubernetes or kubernetes provider is used, container modules can be configured to hot-reload their running services when the module's sources change (i.e. without redeploying). In essence, hot-reloading syncs files into the appropriate running containers (local or remote) when code is changed by the user, and optionally runs a post-sync command inside the container.

For example, services that can be run with a file system watcher that automatically updates the running application process when sources change (e.g. nodemon, Django, Ruby on Rails, and many other web app frameworks) are a natural fit for this feature.

Currently, services are only deployed with hot reloading enabled when their names are passed to the --hot option via garden deploy or garden dev commands (e.g. garden dev --hot=foo-service,bar-service). If these services don't belong to a module defining a hotReload configuration (see below for an example), an error will be thrown if their names are passed to the --hot option.

You can also pass * (e.g. --hot=* / --hot-reload=*) to deploy all compatible services with hot reloading enabled (i.e. all services belonging to a module that defines a hotReload configuration).

Subsequently deploying a service belonging to a module configured for hot reloading via garden deploy (without the watch flag) results in the service being redeployed in its standard configuration.

Since hot reloading is triggered via Garden's file system watcher, hot reloading only occurs while a watch-mode Garden command is running.

Following is an example of a module configured for hot reloading:

kind: Module
description: My Test Service
name: test-service
type: container
hotReload:
  sync:
    - target: /app/
services:
  - name: test-service
    args: [npm, start] # runs `node main.js`
    hotReloadArgs: [npm, run, dev] # runs `nodemon main.js`

In the above, the hotReload field specifies the destination path inside the running container that the module's (top-level) directory (where its garden.yml resides) is synced to. Note that only files tracked in version control are synced, e.g. respecting .gitignore.

If a source is specified along with target, that subpath in the module's directory is synced to the target instead of the default of syncing the module's top-level directory. You can configure several such source/target pairs, but note that the source paths must be disjoint, i.e. a source path may not be a subdirectory of another source path within the same module. Here's an example:

hotReload:
  sync:
    - source: /foo
      target: /app/foo
    - source: /bar
      target: /app/bar

Lastly, hotReloadArgs specifies the arguments to use to run the container (when deployed with hot reloading enabled). If no hotReloadArgs are specified, args is also used to run the container when the service is deployed with hot reloading enabled.

A postSyncCommand can also be added to a module's hot reload configuration. This command is executed inside the running container during each hot reload, after syncing is completed (as the name suggests).

Following is a snippet from the hot-reload-post-sync-command example project. Here, a postSyncCommand is used to touch a file, updating its modification time. This way, nodemon only has to watch one file to keep the running application up to date. See the hot-reload-post-sync-command example for more details and a fuller discussion.

kind: Module
description: Node greeting service
name: node-service
type: container
hotReload:
  sync:
    - target: /app/
  postSyncCommand: [touch, /app/hotreloadfile]
services:
  - name: node-service
    args: [npm, start]
    hotReloadArgs: [npm, run, dev] # Runs `nodemon main.js --watch hotreloadfile`
...
https://docs.garden.io/guides/hot-reload
2020-11-23T22:36:40
CC-MAIN-2020-50
1606141168074.3
[]
docs.garden.io
# What's New?

Welcome to the Commandeer Release Notes page. Here you will find all releases of the Commandeer software, along with detailed information about what was added or fixed. If you have any problems, please feel free to click the button to the right to report an issue on our GitHub open source page.
https://docs.getcommandeer.com/releases
2020-11-23T22:07:22
CC-MAIN-2020-50
1606141168074.3
[]
docs.getcommandeer.com
Django FileBrowser Documentation

Media-Management with Grappelli.

Note: FileBrowser 3.13.1 requires Django 3.0 and Grappelli 2.14.

Releases:

- FileBrowser 3.13.1 (May 15th, 2020): Compatible with Django 3.0
- FileBrowser 3.12.1 (November 14th, 2019): Compatible with Django 2.2 (LTS)
- FileBrowser 3.9.2 (November 2nd, 2018): Compatible with Django 1.11 (LTS)

Current development branches:

- FileBrowser 3.13.1 (Development Version for Django 3.0, see Branch Stable/3.13.x)
- FileBrowser 3.12.2 (Development Version for Django 2.2, see Branch Stable/3.12.x)
- FileBrowser 3.9.3 (Development Version for Django 1.11, see Branch Stable/3.9.x)

Older versions are available at GitHub, but are no longer supported. Support for 3.12.x and 3.9.x is limited to security issues and very important bugfixes.
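For orientation, the pieces usually fit together like this: Grappelli and FileBrowser sit in front of the Django admin, and the FileBrowser site is mounted under the admin URLs. The snippet below is only an illustrative sketch of that typical layout; the app order, URL prefixes, and the omitted MEDIA_ROOT/MEDIA_URL settings are common conventions rather than content from this page, so follow the Quick start guide for the exact steps for your Django and FileBrowser versions.

```python
# Illustrative sketch only; see the FileBrowser Quick start guide for the
# authoritative setup. MEDIA_ROOT and MEDIA_URL must also be configured,
# since FileBrowser manages files under your media directory.

# settings.py: Grappelli and FileBrowser are listed before django.contrib.admin
# so their templates and static files take precedence.
INSTALLED_APPS = [
    "grappelli",
    "filebrowser",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

# urls.py: the FileBrowser site is mounted before the admin URLs.
from django.contrib import admin
from django.urls import include, path
from filebrowser.sites import site

urlpatterns = [
    path("admin/filebrowser/", site.urls),
    path("grappelli/", include("grappelli.urls")),
    path("admin/", admin.site.urls),
]
```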
https://django-filebrowser.readthedocs.io/en/latest/
2020-11-23T21:51:56
CC-MAIN-2020-50
1606141168074.3
[]
django-filebrowser.readthedocs.io