content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Advanced options¶
Subsample mode¶
Subsample mode simply throws away >90% of the data. This allows you
to quickly check whether your pipeline works as expected and the output files
have the expected format. Subsample mode should never be used in production.
To use it, pass the option
--subsample on the command line:
ngless --subsample script.ngl
will run
script.ngl in subsample mode, which will probably run much faster
than the full pipeline, allowing to quickly spot any issues with your code. A
10 hour pipeline will finish in a few minutes (sometimes in just seconds) when
run in subsample mode.
Note
subsample mode is also a way to make sure that all indices exist. Any
map() calls will check that the necessary indices are present: if a
fafile argument is used, then the index will be built if necessary; if
a
reference argument is used, then the necessary datasets are
downloaded if they have not previously been obtained.
Subsample mode also changes all your
write() so that the output
files include the
subsample extension. That is, a call such as:
write(output, ofile='results.txt')
will automatically get rewritten to:
write(output, ofile='results.txt.subsample')
This ensures that you do not confuse subsampled results with the real thing. | http://ngless.readthedocs.io/en/latest/advanced.html | 2018-07-15T22:50:48 | CC-MAIN-2018-30 | 1531676589022.38 | [] | ngless.readthedocs.io |
Adding new download formats¶
While the Aristotle-MDR framework has a PDF download extension, it may be desired to download metadata stored within a registry in a variety of download formats. Rather than include these within the Aristotle-MDR core codebase, additional download formats can be developed included via the download API.
Creating a download module¶
A download module is a specialised class, that sub-classes
aristotle_mdr.downloader.DownloaderBase
and provides an appropriate
download or
bulk_download method.
A download module is just a Django app that includes a specific set of files for generating downloads. The only files required in your app are:
__init__.py- to declare the app as a python module
downloader.py- where your download classes will be stored
Other modules can be written, for example a download module may define models for recording a number of times an item is downloaded.
Writing a
metadata_register¶
Your downloader class must contain a register of download types and the metadata concept types which this module provides downloads for. This takes one of the following forms which define which concepts can be downloaded as in the output format:
class CSVExample(DownloaderBase): download_type = "csv" metadata_register = {'aristotle_mdr': ['valuedomain']} class XLSExample(DownloaderBase): download_type = "xls" metadata_register = {'aristotle_mdr': ['__all__']} class PDFExample(DownloaderBase): download_type = "pdf" metadata_register = '__template__' class TXTExample(DownloaderBase): download_type = "txt" metadata_register = '__all__'
Describing these options, these classes specifies the following downloads:
csvprovides downloads for Value Domains in the Aristotle-MDR module
xlsprovides downloads for all metadata types in the Aristotle-MDR module
txtprovides downloads for all metadata types in all modules
Each download class must also define a class method with the following signature:
def download(cls, request, item):
This is called from Aristotle-MDR when it catches a download type that has been registered for this module. The arguments are:
request- the request object that was used to call the download view. The current user trying to download the item can be gotten by calling
request.user.
item- the item to be downloaded, as retrieved from the database.
Note: If a download method is called the user has been verified to have permissions to view the requested item only. Permissions for other items will have to be checked within the download method.
For more information see the
DownloaderBase class below:
- class
aristotle_mdr.downloader.
DownloaderBase[source]¶
Required class properties:
- description: a description of the downloader type
- download_type: the extension or name of the download to support
- icon_class: the font-awesome class
- metadata_register: can be one of:
- a dictionary with keys corresponding to django app labels and values as lists of models within that app the downloader supports
- the string “__all__” indicating the downloader supports all metadata types
- the string “__template__” indicating the downloader supports any metadata type with a matching download template
- classmethod
bulk_download(request, item)[source]¶
This method must be overriden and return a bulk downloaded set of items as an appropriate django response
How the
download view works¶
aristotle_mdr.views.downloads.
download(request, download_type, iid=None)[source]¶
By default,
aristotle_mdr.views.downloadis called whenever a URL matches the pattern defined in
aristotle_mdr.urls_aristotle:
download/(?P<download_type>[a-zA-Z0-9\-\.]+)/(?P<iid>\d+)/?
This is passed into
downloadwhich resolves the item id (
iid), and determines if a user has permission to view the requested item with that id. If a user is allowed to download this file,
downloaditerates through each download type defined in
ARISTOTLE_SETTINGS.DOWNLOADERS.
A download option tuple takes the following form form:
('file_type','display_name','font_awesome_icon_name','module_name'),
With
file_typeallowing only ASCII alphanumeric and underscores,
display_namecan be any valid python string,
font_awesome_icon_namecan be any Font Awesome icon and
module_nameis the name of the python module that provides a downloader for this file type.
For example, the Aristotle-PDF with Aristotle-MDR is a PDF downloader which has the download definition tuple:
('pdf','PDF','fa-file-pdf-o','aristotle_pdr'),
Where a
file_typemultiple is defined multiple times, the last matching instance in the tuple is used.
Next, the module that is defined for a
file_typeis dynamically imported using
exec, and is wrapped in a
try: exceptblock to catch any exceptions. If the
module_namedoes not match the regex
^[a-zA-Z0-9\_]+$
downloadraises an exception.
If the module is able to be imported,
downloader.pyfrom the given module is imported, this file MUST have a
downloadfunction defined which returns a Django
HttpResponseobject of some form. | http://aristotle-metadata-registry.readthedocs.io/en/latest/extensions/downloads.html | 2018-07-15T22:46:22 | CC-MAIN-2018-30 | 1531676589022.38 | [] | aristotle-metadata-registry.readthedocs.io |
What is the Splunk Community ?
The Splunk Community is comprised of two things—people and programs.
Splunk Community people
The Splunk Community is a group of customers, partners, and Splunk employees (Splunkers) who share their knowledge and experience with other users. These people volunteer their time to help others successfully implement and use Splunk products.
The Splunk Community volunteers have hands-on experience with Splunk products and services. They are real people who openly share how they have applied their experience to real-world use cases. Think of the Splunk Community as a strong, user-based support system.
Splunk Community programs
Splunk sponsors several Splunk Community programs:
- Splunk Answers, a question and answer forum
- Chat groups
- User groups
- SplunkTrust
How Splunk supports the Splunk Community
Because Splunk recognizes how important the Splunk Community is, Splunk has employees that are specifically assigned to assist and grow the Splunk Community.
This documentation applies to the following versions of Get Started with Splunk Community: 1.0
Feedback submitted, thanks! | http://docs.splunk.com/Documentation/Community/1.0/community/AboutCommunity | 2018-07-15T23:21:22 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Driver Verifier.
- Where can I download Driver Verifier?
- When to use Driver Verifier
- How to start Driver Verifier
- How to control Driver Verifier
- How to debug Driver Verifier violations
- Related topics
Where can I download Driver Verifier?
When to use Driver Verifier
Run Driver Verifier throughout the driver development and test process.
Use Driver Verifier to find problems early in the development life cycle, when they are easier and less costly to correct.
Use Driver Verifier when you deploy a driver for testing using the WDK, Visual Studio, and the Windows Hardware Certification Kit (HCK) tests. See Testing a Driver.. See Windows Debugging.
- Open a Command Prompt window (Run as administrator) and type verifier to open the Driver Verifier Manager.
Select Create standard settings (default) and click Next.
You can also choose Create custom settings to select from predefined settings, or to select individual options. See Driver Verifier Options and Selecting Driver Verifier Options for more information.
Select a driver or drivers to verify.
Click Finish and reboot the computer.
Note You can also run Driver Verifier in a Command Prompt window. For example, to run Driver Verifier with the standard settings on a driver called myDriver.sys, you would use the following command:
verifier /standard /driver myDriver.sys
See Driver Verifier Command Syntax for more information.
How to control Driver Verifier
To stop or reset Driver Verifier
- Open a Command Prompt window (Run as administrator) and type verifier to open the Driver Verifier Manager.
- Select Delete existing settings.
- Reboot the computer.
Or type the following command in a Command Prompt window and reboot the computer.
verifier /reset
To view Driver Verifier settings
- Open a Command Prompt window (Run as administrator) and type verifier to open the Driver Verifier Manager.
- Select Display existing settings.
Or type the following command in a Command Prompt window.
verifier /querysettings
To view Driver Verifier statistics
- Open a Command Prompt window (Run as administrator) and type verifier to open the Driver Verifier Manager.
- Select Display information about the currently verified drivers.
Or type the following command in a Command Prompt window.
verifier /query
How to debug Driver Verifier violations
To get the most benefit from Driver Verifier, you should use a kernel debugger and connect to the test computer. See Windows Debugging for more information.
If Driver Verifier detects a violation, it generates a bug check to stop the computer. This is to provide you with the most information possible for debugging the issue. When you have a kernel debugger connected to a test computer running Driver Verifier, if Driver Verifier detects a violation, Windows breaks into the debugger and displays a brief description of the error.
All Driver Verifier violations result in bug checks, the most common ones (although not necessarily all of them) are:
- Bug Check 0xC1: SPECIAL_POOL_DETECTED_MEMORY_CORRUPTION
- Bug Check 0xC4: DRIVER_VERIFIER_DETECTED_VIOLATION
- Bug Check 0xC6: DRIVER_CAUGHT_MODIFYING_FREED_POOL
- Bug Check 0xC9: DRIVER_VERIFIER_IOMANAGER_VIOLATION
- Bug Check 0xD6: DRIVER_PAGE_FAULT_BEYOND_END_OF_ALLOCATION
- Bug Check 0xE6: DRIVER_VERIFIER_DMA_VIOLATION
For more information see Handling a Bug Check When Driver Verifier is Enabled. For tips about debugging Bug Check 0xC4, see Debugging Bug Check 0xC4: DRIVER_VERIFIER_DETECTED_VIOLATION.
When you start a new debug session, use the debugger extension command !analyze. In kernel mode, the !analyze command displays information about the most recent bug check. The !analyze -v command displays additional information and attempts to pinpoint the faulting driver.
kd> !analyze -v
In addition !analyze, you can use the following debugger extensions to view information specific to Driver Verifier:
!verifier dumps captured Driver Verifier statistics. Use !verifier -? to display all of the available options.
kd> !verifier
!deadlock displays information related to locks or objects tracked by Driver Verifier's deadlock detection feature. Use !deadlock -? to display all of the available options.
kd> !deadlock
!iovirp [address] displays information related to an IRP tracked by I/O Verifier. For example:
kd> !iovirp 947cef68
!ruleinfo [RuleID] displays information related to the DDI compliance checking rule that was violated (RuleID is always the first argument to the bug check. All DDI Compliance Checking RuleID are in the form 0x200nn). For example:
kd> !ruleinfo 0x20005
Related topics
Driver Verifier: What's New
Driver Verifier Command Syntax
Controlling Driver Verifier | https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/driver-verifier | 2018-07-16T00:09:45 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.microsoft.com |
Out of date: This is not the most recent version of this page. Please see the most recent version
Blinky on the mbed Online Compiler mbed Enabled board. However, you have to select the target before compiling.
Adding a board to your list
To add a board to your list, go to the board’s page on” or “DAPLINK”, and its type is removable storage
Drag and drop your program to the board.
The board installs the program. Reset the board, and see the LED blink. | https://docs.mbed.com/docs/mbed-os-handbook/en/latest/getting_started/blinky_compiler/ | 2018-07-15T23:19:48 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.mbed.com |
The Days of Genesis Hill Roberts
Similar
Posters ( places in school, subject areas, months, days, colours, numbers, hello...
Kieran Roberts, Matthew Blaber...
Genesis
Abram’s Animal Ceremony in Genesis 15...
Mission Earth 02: Black Genesis...
Neon Genesis Evangelion: title to be decided...
Genesis provides an appropriate welcome to the Bible...
At The Battle of Bunker Hill...
Medical Hill, Göteborg, Sweden...
Oak Hill, Dec 11, 1927 7: 11 pm...
Flint hill fire department...
The Days of Blooming...
Загрузка...
страницы:
1
2
3
4
return to the beginning
скачать
Challenge 3:
The Fiat Days view makes it looks like God used evolution to accomplish creation!
Answer:
Then I have not clearly explained my belief in fiat
creation
. This misunderstanding could arise from the discussion of process being involved in creation, according to Genesis 2. (See response to Challenge 1.) Process implies time for sure, but the Fiat Days interpretation leaves open the type of processes used. Does this mean the Fiat Days view endorses the general evolutionary theory for life and man? God forbid --
not at all!
The process described for the creation of Adam seems to completely preclude the evolutionary hypothesis of human ascent from non-human ancestors. Adam was made from dirt, not animals, or putative pre-human ape-like forebears. Nor do Genesis 1 and 2 indicate that
any
form of life sprang from another pre-existing kind. Indeed, Genesis 1 is very explicit that all the kinds were to follow “after their kind.” This does not remotely resemble the general evolutionary theory of life forming itself from non-living chemicals, and then diversifying from that original primitive cell to become all forms of life on earth. There
are
hints of process in chapter 1 in the statements, “let the earth produce plants,” and “let the earth produce the beasts.” But these do not necessitate an
evolutionary
assumption at all. On the contrary, they indicate in simple language a divine initiation of the natural cycle of life for plants and animals following the laws of physics, chemistry and biology designed by God.
There is no macro-evolution
17
implicit to the Fiat Days view. Both the inspired text and God’s natural revelation indicate that macro-evolution is NOT one of the laws God established in His design. All new kinds come into being at the right moment only because God so spoke on days three, five and six; not because nature randomly evolved all living things willy-nilly from a single ancestral cell. This misunderstanding may also come from some young earth proponents dogmatically asserting that any and all old earth views of creation are tantamount to accepting atheistic evolution. This is pure demagoguery and fear mongering in the name of God. (See the article
“Young Earth or Old Earth Creationists: Can we even talk together?”
at note 2.)
Others charge that accepting the processes suggested by modern physics (
e.g.,
the Big Bang) as a candidate process God used to prepare the earth for His creative work is tantamount to accepting evolution. This is incorrect. The General Theory of Evolution is about how life came to be via
biological
processes. It does not address the formation and change of stars via processes of physics. The General Theory of Evolution is not dependent at all upon Big Bang cosmology. Indeed, Darwin formulated the Theory of Evolution in the early 1800’s well
before
the Big Bang was ever suggested in the middle of the twentieth century. If God chose to speak the entire energy-mass content of the universe into existence at the beginning of time, and then wait for that created energy-mass to do what it would by obeying the laws God designed for energy-mass (which means a spreading and cooling from infinite density energy-mass) – so be it. Nothing in Genesis 1 or the entire Bible speaks against such a view. “In the beginning God created the heavens and the earth.” That statement sounds disarmingly correct to modern cosmology.
18
The Big Bang
presumes
creation from nothing; it is not an alternative to
ex nihilo
19
creation at all. Rather, it affirms creation. The data upon which the Big Bang theory of cosmology is based, and indeed the theory itself, argues in a very dramatic and convincing manner for a singular beginning for the entire universe,
i.e.,
heavens and the earth.
20
Naturalistic atheists
hate
Big Bang cosmology precisely because it shows there indeed was a
beginning
to time, space, energy and matter.
21
Such a heavy theological finding is pure anathema to a philosophical naturalist/materialist. The philosophical implications of a universal beginning are
overwhelming
: a beginning of
everything
physical
requires a Beginner outside of everything physical. Such requires a
transcendent
Beginner with sufficient power and wisdom to be an adequate cause of this universe. In short, a Big Bang requires a transcendent god just like the one in the Bible who speaks from on high and causes it to come to pass by the power of His Word: Jehovah God, Eternal God, I AM God. And what does a God-decreed Big Bang of something-from-nothing get us? A wet formless void earth, just like we read of in Genesis 1:2, and just like we see for all the other planets we can observe in the universe. Without God speaking it into shape, we get nothing more for earth. Earth is where we really can begin to see the hand of God as the creator of our special world. It is a very unique little blue planet just right for God to work out His plans for mankind. It was indeed
very good
. (See Brown and Ward’s
Rare Earth
, for more insight into how unique the Earth is, especially regarding its suitability for intelligent life.)
Challenge 4:
If creation occurs over a long duration, how did plants live for so long without the sun? Nature has to be fully formed all at once, it can't come into existence spread out over eons.
^
Answer:
This challenge forgets that God decreed light for the formless earth on day one, and thereafter there was evening and morning. Thus plants did
not
have to survive for eons without light. Light was decreed on day one and a day-night cycle was operating on earth prior to the call for plants’ arrival.
22
However, it is very likely from earth science that the earth’s first atmosphere would have been too dense to actually
see
the sun, moon or stars. Hence, it would have been like a very overcast day in which day and night still occur, but no sun is visible. Or, like it is on Venus still today. On day four the sun, moon and stars are made to serve as timekeepers for years and seasons in addition to days, which had already began with day one. One possibility is that with the fiat of day four the sky was changed so the heavenly bodies could be observed and operate in a new and more direct manner upon earth’s surface in preparation for the calling forth of animal life beginning with the fiats of days five and six.
In the larger picture, this challenge assumes God can only create within some presupposed time constraint inferred from present day operations. This unnecessarily limits God. If God wants to stretch things out, the way the natural record indicates He did, then so be it. I would further suppose in that case that God knew what was best, rather than us.
23
The natural record also indicates that things seemed to work quite nicely coming into existence in an orderly fashion over eons. Apparently, this challenge isn't even a physical constraint, much less a divine constraint. An analogy might be made with building houses. All elements of a house depend on each other: the foundation supports the walls and roof, and the walls and roof protect the foundation, etc. However, it would be folly to argue that therefore houses must be built instantaneously. Instead, we understand that houses need to be built at a pace and in a sequence consistent with the materials and processes used to make a house according to its builder’s design. So it was with creation.
Imagine describing how to build a house in only 31 verses. Why would we suppose that Genesis 1 contains all the details for creating the whole universe? As already explained, that was not its purpose at all. Indeed, if this Fiat Day approach is off the mark anywhere, it is likely to be in trying to read Genesis 1 as having anything to do with what has been learned by observing nature. It never was intended to be a science text, not even a very primitive one. Instead, Genesis 1 assured the Israelites that all those things so readily observed to exist and so readily worshipped (the sun, stars, moon, animals, waters, land, skies, man) were instead all created and made by God according to His purposes and by the power of Him just saying so. It suited His purposes perfectly. It was very good.
Challenge 5:
It wouldn’t take a full day just for God to
say
, “Let there be light.”
Answer:
Correct. The Fiat Days view does not imply that a whole day is required by God to form any of the pronouncements. But turning this challenge about, it would be just as fair a criticism of the young earth interpretation to say that a whole day is not required for God to instantaneously create light, so the young earth twenty-four hour creation day suffers from its own challenge. No young earth proponent of consecutive twenty-four hour creation days understands that God
needed
six twenty-four hour days. For example, they argue that the light was created to be everywhere instantaneously. Thus begging the question of what did God do with the rest of the first twenty-four day? I believe it is better for both views to understand these statements in an entirely different manner. For example, if I say I was born in 1952, no-one understands that to mean I think it took a whole year for my birth. Likewise, when God said on day one “let there be light”, no-one should understand God to mean He
needed
twenty-four hours to say it.
Challenge 6:
When the Hebrew word
yom
is used with a number in scripture, it always means a twenty-four hour day. Therefore the creation days are twenty-four hour days, not long ages.
Answer:
First, the grammatical premise of this challenge is fatally flawed. There is no such rule of Hebrew grammar that supports this claim. One will not find such a rule in any standard text on Hebrew grammar. Further, there are certainly instances in the Bible where this claim is demonstrably false. For example, Deut. 10:10, as noted in Young’s concordance, uses the word
yom
with the cardinal “one” (exactly as in Genesis 1:5) to mean forty days, not one 24 hour day. Hence, it is translated “the first time” rather than “day one” which would make no sense in that verse. So
yom
with a number does not always mean a twenty-four hour day. There are other similar examples, but one example suffices to show the fallacy. (See 1 Sam 7:2, 1Chrn 29:27, Hosea 6:2, and Zech 7:14.) Of course there
are
many cases where day with a number means a calendar day since most such instances involve the “seventh day.” This of course refers to a particular day of the week. And since the seventh day was so prominent in Jewish life, this form (day + number) shows up often in scripture. But the fact that it
usually
means something doesn’t mean it must
always
mean that. This challenge is based upon a false rule of Hebrew grammar.
Second, this challenge is irrelevant to the Fiat Day view, since the Fiat Day view
accepts
that the usage of
yom
in Genesis 1 most probably means a calendar day based primarily upon the “evening to morning” formula.
Challenge 7:
Jesus’ marriage reference arguing “from the beginning of creation” in Mark 10:6 clearly shows that the creation was a very short period consistent with a week, not spread out over a long time.
Answer:
This is a fairly recently-voiced argument as far as I can find. No commentary I’ve researched seems to be aware that Jesus was really commenting on the young-earth/old-earth issue when He gave this teaching on the permanence of marriage. Uniformly among the commentators, His reference is simply taken to mean that ever since the very first marriage, God’s design for marriage was one man with one woman for life. The force of Jesus’ argument is carried by referencing the first
marriage
of male and female during the creation, not when the first
moment
of creation occurred. The parallel passage in Matthew 19:4 explicitly makes this point. So when Mark records “from the beginning of creation” in the context of God’s design for marriage, it is clearly seen that Jesus refers to Genesis 2:21-25, and clearly NOT Genesis 1:1.
This argument is self defeating if it is taken as literally as the argument presumes. If Jesus really meant that marriage existed from the
beginning
of creation, then Jesus was wrong, which is inconceivable. The
beginning
of creation is Genesis 1:1. The inspired text says so. Marriage is the very
event in the whole creation account, not the beginning event. It is part of the day-six events, as recorded in Genesis 2:23-24. If meant to be taken so literally regarding the span of the creation week, Jesus should have said from the
end
of creation, not from its beginning. For Jesus to be wrong by six days is just as huge a challenge as to be wrong by six millennia – but only if He meant it to be taken precisely in the first place, as this challenge incorrectly presumes.
Now, those who advocate this challenge argue that it’s okay for Jesus to speak of a mere six days as “the beginning,” but inconceivable that He would accommodate a span of billions of years as “from the beginning.” That of course presumes that God experiences the same time limitations as we do. But of course scripture explicitly tells us God does not experience time as we do. In particular, Psalms 90:1 and 2 Peter 3:8 both inform us that God indeed sees what are mind-boggling millennia from our view as mere days from His view. So, this challenge is explicitly defeated by scripture. Also consider that God
often
speaks of long time frames in terms that would seemingly indicate a very brief span. For example, in Act 2:16 Peter tells us at Pentecost that they were witnessing the fulfillment of the “last days” prophecy of Joel. Yet, instead of those last DAYS encompassing only a few literal days that surrounded Pentecost, the last days period is the whole period that merely began on Pentecost as Christ’s reign opens up the kingdom to receive sinners into Christ. What began with those first converts on that day, continues up to now, nearly two thousand years later. “Last days” equals thousands of years. So, it is no linguistic foul in this regard if “beginning days” also spans a much longer time. Therefore, the language of Mark 10:6 does not in any way constrain the length of God’s creation process which culminated with the first marriage referenced by Jesus as the basis for marriage union ever since. Beginning days, last days – what a beautiful parallel. Unfortunately, just as the Jews and even early Christians mistook the prophecies of last days and judgment as being very immediate and short term, so now many mistake the language of creation as meaning it must have all been over and done with in a matter of hours. We make the same interpretive mistake on both ends of the time spectrum.
Today, when we want to cite book, chapter and verse we use the system published in today’s Bibles. In Jesus’ day there was no such system. To give a scripture reference in those days, teachers of the Law would cite the reference by the name of the prophet, or as part of the Law, Psalms or Prophets (e.g., Luke 24:44), or by citing the passage intended, or by summarizing the incident involved. In Mark 10:9, as any good teacher of God’s Word does, Jesus is giving His scripture reference: Genesis, The Beginning of Creation. This was especially forceful for Jesus’ argument because He thus showed that from the very first marriage (“Remember, it’s back there in Genesis in the story about the beginning of creation.”) … from the very first marriage, God’s will has been the same: one man with one woman for life. It’s a far stretch to make Mark 10:6 into commentary on the span of the prior creation week. Notice that in this passage Jesus said nothing at all about the days of creation. To claim He did is a gross twisting of the scriptures to prop up a preconception being imposed upon inspired text.
Challenge 8:
If the six days of creation are spread out over vast eons as God-decreed natural processes operate, then animals were dying for a long time, in contradiction to the doctrine that all death on earth being the result of Adam’s sin, as taught by the Apostle Paul in Romans 5-6 and 1 Cor. 15.
Answer:
That doctrine is flawed. The scriptures do not anywhere teach that no animals died before Adam and Eve sinned. No animal death before Genesis 3 is an inferred doctrine that is not so stated in scripture. Romans 5-6 (and 1 Cor. 15:22) concerns only the death of
man
as a result of
man’s
sin. It argues a contrast between Adam and Christ. By Adam sin entered the world and death by sin. In contrast, Christ brought salvation from sin to
mankind
. If the death that came by Adam’s sin encompassed the death of animals, then the salvation that comes by Christ also “saves” the animals from their sin. This is untenable. This doctrine of no-animal-death-before-Adam’s-sin was not actually taught by Paul. He was only speaking of human sin, human death and human salvation. Notice that Romans 5:12 is explicit in this regard. “Therefore, just as through one man sin entered into the world, and death through sin, and so death spread to all
men
, because all sinned.” All
men
, not all living creatures.
However, advocates of the no-animal-death-before-sin doctrine point back to Genesis 1:31. It states that God viewed the whole creation in its completion and pronounced it “very good.” The self-serving claim is made that if animals had been dying it wouldn’t have been possible for God to say it was a very good creation. This is pure speculation. And it is circular speculation at that. It presumes what it argues: that “very good” equals “no animal death.” If God designed a world to perfectly accomplish His purposes (and He did), and in that world which perfectly accomplishes His purposes, animals die according to His perfect design for their role in creation, then who is so presumptuous to say God cannot pronounce His perfectly suited creation “very good?”
^
Answer:
Many young earth proponents say if there was
any
death, or even any pain, it wasn’t “very good,” it wasn’t a perfect creation. Often times such believers have told me their belief in no-animal-death-before sin is because of their belief in certain doctrines concerning original sin. Many believers extend the idea of death as a consequence of original sin, to include all death of all living things as a consequence of Adam’s original sin “infecting” all mankind and all creation with sin and death. It is absolutely true that sin entered the world of men with Adam and Eve’s first sin, but the Bible does not teach that all men are sinners because of
inheriting
Adam’s sin. This idea of an inherited
guilt
of Adam’s original sin is directly refuted by the prophet Ezekiel in 18:14ff. Rather the biblical idea is that the world into which each person is born is since then a world in which sin dominates the lives of humans, and it has been so since the day of Adam and Eve, so even our loving parents are sinners just as were Cain and Abel’s parents. On that day the “world” was 100% infected with sin. Every living soul (Adam and Eve) were sinners. And it has remained so ever since. Thus we are born into sin, as David noted in Psalms 51: 5. David wasn’t a sinner himself as a newborn, although Psalms 58:3 indicates that sin comes very early into the lives of some. (Clearly newborns are not liars – they can’t even talk yet. This is an example of poetic hyperbole in Psalm 58.) David fell to the temptations of the world into which he was born and thus he sinned, becoming a sinner along with the rest of us. Satan roams the earth seeking to devour whomever he can, just like he pursued Job – with the Devil’s temptations giving way for our sins. It makes for a world of corruption, temptation, evil, suffering and wrath that appeals to the base aspects of our very being (Eph. 2:1-3). Result: every one of us has sinned just as Adam and Eve did (Romans 3:21). Thus, through one man sin entered the world. But we did not
inherit
Adam’s original sin. We are all sinners because of
our own
personal sin. Nowhere does the Bible speak of inherited sin, other than to say explicitly it
isn’t
inherited from father to son. So there is no reason to extend the consequences of Adam’s original sin to the animal and plant world. Plants and animals die because the “very good” perfect design created by God for plants and animals includes their physical demise, not because Adam sinned.
Were it not for the tree of life in the garden, it seems the same natural end would have prevailed for Adam and Eve, but by the grace of God, there
was
a tree of life for them. They were protected from physical death until they were prevented from eating of that tree, as a consequence of their sin. Then, they died spiritually being separated from God; and from that day forward they were dying physically as well, Genesis 5:1, as promised by God in 2:17. Without their sin, neither form of death would have happened. The same is true for us. We die physically because that’s the way God designed our physical bodies and we do not have access to the tree of life to prevent it. We die spiritually when we sin, and thus become separated from God because we cannot stand before Him in our un-right state. And one cannot do anything to recover righteousness lost. We all do it, we all sin. All except Jesus. Once Jesus came and overcame sin, He also overcame death. Just as in Adam all died, so in Christ all are made alive. He took the sting out of death so that believers no longer fear physical death because we have been made alive, reborn in baptism, (John 3 and Romans 6) unto a new life in Christ: a spirit life, not a fleshly life. (Romans 8) Sin and Satan no longer dominate us as their slaves (Romans 6). We have been made free from sin! (Galations 4) Our hope is for a new resurrected body living in Heaven with God where the tree of life stands beside the river of life for the saints to live forever with God! (Revelation 21-22) Sorry – got to preaching there.
Advocates of no-animal-death-before-sin will often point to Paul’s statements in Romans 8:19-22 that the whole creation groans in travail to bring forth the salvation of Christ.
24
Romans 8 has absolutely nothing to do with animal death. Read it carefully. There is not one word about animals dying, pro or con. In fact the figure Paul employs is just the opposite of death: it is a figure of birth, where the birth’s labor is further illustrated by laborers in bondage being set free, just as birth sets free the newborn. Romans 8:19-22 has absolutely nothing to do with death at all, it is a passage of deliverance for all of suffering mankind, and by metonymy all of creation, being born again and set free through eternal life in Christ. Indeed, we know from other passages that this physical creation is not “redeemed” like the souls of man. Rather, it will be
destroyed
by fire so the even the elements burn up, 2 Peter 3:10ff. The saved are collected by Jesus from the earth at His return prior to that eternal destruction, 1 Thessalonians 4:15-18. And the destroyed earth is replaced in His scheme of things by “a new heaven and new earth.” 2 Peter 3:13. It is this new heaven and earth which is described in Revelation 21 and 22 as the eternal home of the faithful.
Consider Psalm 104. This psalm is easily seen to be a parallel to the creation account in Genesis, as noted by all the standard commentaries. For example, verse five reads, “He established the earth upon its foundations so that it will not totter forever.” This is clearly about when God created the creation. As part of that description of the creation designed by God in particular notice verse 21.
“The young lions roar after their
prey
, And seek their
food (meat)
from God.”
From this, there should be no question that the creation God designed included the death of animals as part of His overall design from the beginning, not as a consequence of the sin of man. While this may or may not have been the rule within the Garden of Eden, protected by the tree of life, it was how God designed the whole of nature operating outside the garden from the beginning. Eden was a limited place with boundaries set by the four rivers. Beyond that we do not have any inspired description of what the creation was like, unless it is here in the 104
th
psalm. In that creation description, animals die.
Consider two arguments that animal death was a known factor to Adam
before
they sinned.
1) Eating fruit from the tree of life in the garden of Eden is what kept Adam and Eve alive, not a complete absence of the life-death cycle. In fact, if death could not operate on earth at the beginning why did they need a tree of life to keep them alive? But if death was prevented from operating on Adam and Eve by the tree, nothing in the text indicates that the tree served any similar purpose for the animals. Therefore, it is reasonable to conclude that animals would have been dying in accord with the natural designs for their bodies and systems. This is a good thing. For example, for Adam to eat and digest an apple requires the death of billions of rapidly reproducing organisms and cells that live in the human gut to aide in digestion. All such microscopic organisms, which play a vital role throughout the entire ecosystem, must die just as rapidly as they reproduce or else before long, a matter of only a few hours actually, the whole world would be just one big bacterial colony. Death of organisms is how life is designed to operate. Human death for Adam and Eve was held off in the garden by eating from the tree of life. As part of their punishment for sin they were isolated from the tree of life. Thus, as Genesis 5 so starkly tells us, “he died.” Ever since, man has not been able to eat from the tree of life. However, one day we will find that tree again beside the river of life in Heaven, Revelation 22. Then we will enjoy the blessings of life eternal with God.
2) In Genesis 2:17 God warns Adam not to eat of the tree of knowledge of good and evil saying, “In the day you eat of it, you will surely die.” This implies that Adam already had a concept of death. Today we claim this passage meant his spiritual death rather than physical death. This is claimed primarily because it is rather obvious from the story itself that Adam lived for hundreds of years after the day he first sinned. We correctly note from other passages that spiritual death is a separation from God. Since on the day of their sin they were cast out of the garden and thus separated from God, we conclude that is the type of death meant in God’s warning in 2:17. Well that sounds like a good argument, but ask yourself this. Is that spiritual meaning, “the plain ordinary meaning which Adam would have gotten from the warning?” Or, would Adam more likely have understood it to mean just plain ordinary physical death? I argue that Adam would have been far more likely to have understood this warning only in the sense of physical death of the type he witnessed occurring with the animals, which weren’t protected by the eating from the tree of life. But if that is the case, what about the promise to die the very day they ate from the forbidden tree? I would argue that indeed they did
begin
to die physically that very day. They certainly “died” spiritually as well, but that death hopefully was not permanent. Physical death was certain from that day forward for Adam and Eve. Why? Because God said so.
If God speaks (2:17), it happens (5:5)!
Download
166.08 Kb.
Page
2/4
Date conversion
01.09.2011
Size
166.08 | http://docs.exdat.com/docs/index-99618.html?page=2 | 2018-07-15T22:48:09 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.exdat.com |
The first delivery of the use case includes the reference architecture and the design and design decisions for the micro-segmentation platform. This includes information about product versions. Scale validation includes an environment with 100 hosts and 3 000 virtual machines.
In the core micro-segmentation use case, logical networking is at the center of the design. The use case validates creation of security rules that protect virtual machines by using NSX distributed firewalls. The user performs configuration by using the vSphere Web Client.
In the future, the use case will include validation and best practices for Service Composer groups and policies. This use case includes service integration and chaining of security services provided by NSX for vSphere with partner services. | https://docs.vmware.com/en/VMware-Validated-Design/4.1/com.vmware.vvd.usecases-introduction.doc/GUID-05FF0C77-0D03-4C24-8813-2B4C708A443D.html | 2018-07-15T23:36:35 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.vmware.com |
1 Introduction
By default, a Mendix app is blocked from running inside an iframe. This is to protect the end-user from attacks using clickjacking. There is more information on this in the Adding HTTP Headers section of How To Implement Best Practices for App Security.
You can enable your app to run inside an iframe by setting the X-Frame-Options HTTP header for your node’s environment. For the Mendix Cloud, this can be done within the Mendix Developer Portal, as described in the HTTP Headers section of Environment Details.
2 Resolving Browser Issues
Most browsers have additional security to ensure that iframes are only allowed when they are from the same domain as the main page. If your app does not have the same domain as the main page containing the iframe, it will only run if the SameSite cookie is set to allow this. You can find a good explanation of SameSite cookies in SameSite cookies explained on the web.dev website.
When running your app in the Mendix Cloud, you can set the SameSite cookie through a custom runtime setting as explained in the Running Your App in an Iframe section of Environment Details.
If your app is deployed outside the Mendix Cloud (on premises, for example), then you will need to configure your webserver to set the SameSite cookie to the correct value. | https://docs.mendix.com/developerportal/deploy/running-in-iframe | 2022-01-16T21:53:56 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.mendix.com |
Introduction to nilvana™ flow ─ Main nodes
In this article we are going to cover the following topics:
Visual Learners! Here is the tech demo
Introduction
There are 8 nodes under nilvana's categorization; camera configuration, face recognition, image preview, blacklists, white or black list, whitelists, facial web hooks and gatekeeper, respectively. There are dependencies linked between the nodes, therefore, please follow the steps accordingly.
In this article, we are going to introduce camera configuration, face recognition and image preview.
▽ 8 nodes under nilvana's categorization
▽ Hover your mouse on the node, there will be a dialog box with brief introduction
▽ Click on the book button, the details of node display on the right
▽ Drag the node into the flow
▽ Update settings by double clicking on the node
Camera configuration
This node is developed for controlling your camera, make sure that your USB camera is connected to your atom device before turning the power on. The first thing you need to do after adding this node to your flow is assign an MQTT broker address. This broker is already installed on your edge device, therefore you can simply input localhost to the Server field. You can either control the frame rate by adjusting the Interval field manually or by checking the face detection box for auto-detection. Once the face detection function is enabled, this node will only pass image data to the downstream node when it detects a face inside the camera. Don't forget to deploy your modifications and toggle the node.
▽ Add the camera configuration node to flow
▽ Set the MQTT broker
▽ The MQTT broker has been installed, easily fill with localhost
▽ Press the Done button to finish configuration
▽ Don't forget to deploy your settings
▽ You will see the "Successfully deployed"
Face recognition
Once you've set up the camera configuration node, you can add this node to the flow to recognize known faces. Thanks to the face enrollment kits running on the workstation, you would only need to input the workstation IP address.
▽ Add the face recognition node to the flow
▽ Fill with the IP address of Workstation
Image preview
After finishing face recognition settings, you can obtain face information when the system recognizes a known face. By adding the image preview into the flow, you can see the recognized face position and name under the node. Modify the parameters in the width column if you want to adjust the display size of the preview frame.
▽ Add the image preview node to flow
▽ Add the line to connect face recognition and image preview
▽ Don't forget to deploy the modifications
▽ Click on the box in front of the camera configuration to activate the camera
▽ Now, you can see the image preview of the recognition
| https://docs.nilvana.ai/article/45-intro-nilvana-flow | 2022-01-16T22:18:55 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf50990b11ce44f6393f2b/file-kHPDNdJqtZ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf50d327288b7f895d72b8/file-vxpXpO4jS1.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fe0113127288b7f895d73ce/file-ZQDBEwUccv.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf5fc127288b7f895d72cc/file-55G72NmzdH.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf5ff6a5d295659b36a959/file-TmuBfU9zxa.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf5fc127288b7f895d72cc/file-55G72NmzdH.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf5ff6a5d295659b36a959/file-TmuBfU9zxa.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf614b7129911ba1b22ba4/file-hAUwQb1bYQ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf618a7129911ba1b22ba5/file-474jqwPHpB.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fe1a1f627288b7f895d79e6/file-qgC4Cr5UCr.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf61dbb624c71b7985b342/file-5U6yzlfBp5.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf629b7129911ba1b22baa/file-LRBnAuT1Ie.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf62c8b624c71b7985b344/file-qbSCMJLuRr.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf66790b11ce44f6393f50/file-nq28ZuXL0Z.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf66970b11ce44f6393f51/file-zyNTOHsG07.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf66beb624c71b7985b34d/file-tC9VylS6lN.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf66eb7129911ba1b22bb3/file-JozVvUpLTx.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5fa11e79cff47e00160b60fa/images/5fdf670827288b7f895d72d3/file-HGnzPVcUwJ.png',
None], dtype=object) ] | docs.nilvana.ai |
3 Gapminder
All data visualization starts with data to visualize, and we begin with excerpts of data from Gapminder: more specifically, we begin with a CSV dump of the data in the Gapminder library for R. This data is already tidy and in the format we want, so we merely read it in as a CSV using df-read/csv from the data-frame library:
Let’s break down this code. The main form is graph, which takes a number of keyword arguments. The #:data keyword argument specifies the data-frame that we want to plot.
The #:mapping keyword argument specifies our aes (standing for aesthetics), which dictates how we actually want the data to be shown on the plot. In this case, our mapping states that we want to map the x-axis to the variable gdpPercap, and the y-axis to the variable lifeExp.
Finally, the rest of our arguments dictate our renderers. In this case, the points renderer states that we want each data point to be drawn as a single point.
The #:x-transform keyword argument specifies a transform?, which combines a plot transform and ticks. In this case, we use the logarithmic-transform function, which is already defined.
All we’ve done here is added labels and titles via their eponymous keyword arguments, and added a keyword to the renderer points.
Note, crucially, that fit takes into account our transform: despite the fit looking linear here, it is actually a logarithmic fit, since it fits on the transformed data.
Now we’re seeing some notable differences from where we’ve started! We made a scatter plot, transformed its axes, labeled it, and added aesthetics to make it more readable. | https://docs.racket-lang.org/graphite-tutorial/Gapminder.html | 2022-01-16T22:53:37 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.racket-lang.org |
各 LOD の通常のマッピングを有効にするブール値の配列を取得して設定します。
Because normal mapping comes with an increased performance cost, you may want to only render normal maps on the SpeedTree assets that are nearest to the player. You can use this feature to improve performance by disabling normal mapping normal mapping on the first two LOD levels, but disable it on the third. | https://docs.unity3d.com/ja/2020.2/ScriptReference/SpeedTreeImporter-enableBump.html | 2022-01-16T22:26:29 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.unity3d.com |
Detecting and analyzing faces
Amazon Rekognition can detect faces in images and videos. This section covers non-storage operations for analyzing.
When you provide an image that contains a face, Amazon Rekognition detects the face in the image, analyzes the facial attributes of the face, and then returns a percent confidence score for the face and the facial attributes that are detected in the image.
This section provides examples for both image and video facial analysis. For more information about using the Amazon Rekognition API, see Working with images and Working with stored videos.
The face detection models used by Amazon Rekognition Image and Amazon Rekognition Video don't support the detection of faces in cartoon/animated characters or non-human entities. If you want to detect cartoon characters in images or videos, we recommend using Amazon Rekognition Custom Labels. For more information, see the Amazon Rekognition Custom Labels Developer Guide.
You can use storage operations to save facial metadata for faces detected in an image. Later you can search for stored faces in both images and videos. For example, this enables searching for a specific person in a video. For more information, see Searching faces in a collection.
Topics | https://docs.aws.amazon.com/rekognition/latest/dg/faces.html?pg=ln&sec=ft | 2022-01-16T23:47:11 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['images/sample-detect-faces.png', None], dtype=object)] | docs.aws.amazon.com |
Synology NAS
This document provides the steps required to configure the Synology NAS Inspector.
Quick Details
Recommended Agent: On-Premises
Supported Agents: On-Premises or Self-Hosted
Is Auto-Discovered By: Network Discovery Inspector
Can Auto-Discover: N/A
Parent/Child Type Inspector: No
Inspection via: API
Data Summary: Here
Overview
Liongard's Synology Inspector pulls various details about a Synology System, including storage information, network information, system health, and NAS share information.
See it in Action
Inspector Setup Preparation
Firmware Versions
The Synology NAS Inspector was developed using firmware version 6.2.x. If you deploy the Inspector on an older version and it fails, updating to 6.2.x will likely resolve the issue. This Inspector does not currently work with firmware version 7.x or higher.
User Account Permissions
Liongard's Synology NAS Inspector requires Admin permissions so it can return the richest data; the Inspector will only return data based on the level of permissions granted.
Additionally, Synology does not currently offer a way to programmatically access their API with a user that has MFA enabled, so the Inspector will not authenticate properly with a user that has MFA enabled. A very strong password is recommended for this user instead.
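If you want to sanity-check that the service account can authenticate programmatically before wiring it into Liongard, you can call the DSM Web API login endpoint directly. The sketch below is not part of Liongard's tooling; the NAS address, port, and credentials are placeholder assumptions, and the exact error payload returned for MFA-enabled accounts may vary by DSM version.

```python
import requests

# Placeholder values -- replace with your NAS address and the Inspector service account.
# Assumes DSM 6.2.x with HTTPS on the default port 5001.
NAS = "https://192.168.1.50:5001"
USERNAME = "liongard-svc"
PASSWORD = "a-very-strong-password"

# DSM Web API login (SYNO.API.Auth). A successful call returns a session id (sid),
# which is what any programmatic integration needs in order to query the device.
resp = requests.get(
    f"{NAS}/webapi/auth.cgi",
    params={
        "api": "SYNO.API.Auth",
        "version": "3",
        "method": "login",
        "account": USERNAME,
        "passwd": PASSWORD,
        "session": "LiongardTest",
        "format": "sid",
    },
    verify=False,  # DSM often uses a self-signed certificate; tighten this where appropriate
    timeout=10,
)
data = resp.json()

if data.get("success"):
    print("Login OK, session id:", data["data"]["sid"])
else:
    # An account with MFA enabled will fail here even with the correct password,
    # which is why the Inspector's service account must not use MFA.
    print("Login failed, error:", data.get("error"))
```

If this script cannot log in, the Inspector will not be able to either; fix the account (or its MFA setting) before continuing.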
Existing Admin User
If you already have an existing Admin user you would like to use, proceed to Steps 5 and 6 in this section to ensure the Admin user has proper share access to obtain File Station information.
Create an Admin User
- Log in to your Synology Web Interface
- Open the Control Panel from the Home screen
- Access the Users section of the Control Panel
From here, create a new Admin user for Inspector setup, or use an existing Admin user account.
a. If you are creating a new user, use the Create button.
b. If you are using an existing user, select Edit over the user you want to use.
- Assign the User Read/Write access to all shares
Placing the user in the system default administration group should accomplish this.
- Assign the User to the Administration Group. All other user setup options can be left at their defaults or changed as needed.
- In the DSM Settings, we recommend automatically redirecting HTTP connections to HTTPS for the DSM desktop.
Liongard Inspector Setup
Incorrect Credentials
If incorrect credentials are input into the Inspector configuration in Liongard, the Inspector will make multiple attempts to authenticate, which will result in an account lockout.
If the credentials are incorrect, the Inspector will return an authorization error.
Individual Inspector Setup
In Liongard, navigate to Admin > Inspectors > Synology NAS Inspector > Select Add System.
Fill in the following information:
- Environment: Select the Environment this System should be associated to
- Friendly Name: Suggested "Synology NAS [Environment Name]"
- Agent: Select the On-Premises Agent installed for this Environment
- Inspector Version: Latest
- IP Hostname: The IP address of the Synology Device
- HTTPS Port: The default Port of 5001 will be used if none is set
- Admin Username: Username for the Admin user account
- Admin Password: Password for the Admin user
- Bulk Inspector Setup via CSV Import: Navigate to Admin > Inspectors > Synology NAS and download the "synology-nas-inspector" CSV template, then fill in the following columns (a sample row is shown after this list):
- Environment.Name: This column is case sensitive. Copy and paste the associated Environment name from the Dashboard screen
- Alias: Enter the Desired Friendly Name
- Config.HOSTNAME: Enter the internal IP address or the fully qualified domain name for the Synology Device.
- If you are using a Cloud Agent, the public IP address or name that will allow access to the API port is appropriate
- Config.PORT: The default HTTPS port is "5001", but if you changed the port when setting up access in Synology, enter that port number here
- Config.USERNAME: Enter a username that is in the Administrators group on the Synology Device
- SecureConfig.PASSWORD: Enter the password for the above User
- FreqType: Enter "days"
- FreqInterval: Enter "1"
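For illustration only, a completed row of the template could look like the following; every value shown (environment name, host, username, password) is a placeholder:

Environment.Name,Alias,Config.HOSTNAME,Config.PORT,Config.USERNAME,SecureConfig.PASSWORD,FreqType,FreqInterval
Acme Corp,Synology NAS Acme Corp,192.168.1.50,5001,liongard-admin,ExamplePassword!123,days,1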
When ready to Import the CSV Template of Inspectors, navigate to Admin > Inspectors > Synology NAS > Select the up arrow icon in the top right-hand to Import CSV > Select your saved template.
After the successful import notification, reload your browser to find your imported Inspectors.
These Inspectors will automatically trigger themselves to run within a minute.
Activating Auto-Discovered Inspectors
If you have set up a Network Discovery Inspector, it can auto-discover your Synology NAS Inspectors. After completing the setup preparation above, follow the steps below:
Navigate to Admin > Inspectors > Select Synology NAS Inspector > Select the Discovered Systems tab
Here you can Activate your Discovered Synology NAS Inspector(s):
- Individually select the three dots Action menu to the left of the Discovered Synology NAS Inspector(s)
- Edit the Synology NAS Inspector(s) to include the following credentials gathered in the Inspector Setup Preparation
- Admin Username: Username for the Admin user account
- Admin Password: Password for the Admin user
- Save the Inspector(s)
- Select the checkbox to the left of the Inspector(s) that you would like to Activate
- Select the Actions drop-down menu above the Discovered Systems table
- Select Activate Launchpoints
Serverless Environment
We recommend deploying the Synology NAS Inspector using an On-Premises Agent. However, if a client network is serverless, you can deploy and whitelist a Self-Hosted Agent and use that Agent to run the Inspector. Please review this documentation for more information.
This Inspector runs on Port 5001.
Synology NAS Quick Tips/FAQs
Method for specifying custom search properties has changed
Valid from Pega Version Pega Platform
The method for specifying a custom set of properties that are stored in the Elasticsearch indexes and that can be referenced later in filter conditions or returned in a set of results has changed. Previously, the pySearchModel Data Transform rule was used to specify a custom search property list within a class. The new method is to specify a Data instance of the Data-CustomProperties-Search class.
After upgrading to Pega 7.2, make these changes:
- Reenter existing pySearchModel Data Transform rules as Data-CustomProperties-Search data instances.
- Enable the indexing/useDataInstances Data Admin system setting.
After you change the set of custom search properties for a class, rebuild the Elasticsearch index for that class on the System Settings Search landing page.
1.13 Programs and Modules
When you write a program using #lang plait, you are technically defining a module. A Plait module contains a mixture of expressions and definitions. The expressions are evaluated in order, and the value of each expression is printed after the expression is evaluated (unless the result value has type Void). The order of function definitions doesn’t matter, as long as a function definition appears before any expression that eventually calls the function.
Note the use of #; in the example above. A #; comments out the entire form that follows it, which is handy for commenting out a definition or expression, even when the definition or expression spans multiple lines.
Modules written with the module form can be nested in other modules. A nested module is called a submodule. Plait programs don’t often use submodules that are written with module, but the module+ form is more common. A module+ form creates a submodule by merging all module+s that use the same name. A typical use of module+ is to move all of a program’s tests into a test submodule.
The submodule name test is special, because DrRacket automatically runs a test submodule (if one is present) after running the enclosing module. In the above example, since the test submodule is run after the enclosing module that defines is-odd? and is-even?, the tests can use all of the functions. Another advantage of putting tests in a test submodule is that you can turn off the tests. In DrRacket’s Language menu, select Choose Language, click Show Details, click Submodules to run, and then uncheck the test item.
A Plait module’s definitions are automatically exported from the module. You can import the definitions of another module by using the require form, typically with a string that is a relative path to the module to import.
"math.rkt"
"circle.rkt"
A submodule created by module+ automatically imports the bindings of the enclosing module, which is why (module+ test ....) submodules can automatically access definitions for testing. In contrast, if you write definitions inside (module+ test ....), then the definitions can be used for tests in any (module+ test ....), but the enclosing module will not see the definitions.
Mutator SET clauses provide a syntax for updating structured type columns. A mutator SET clause can only be used to update structured UDT columns (the specified column_name in a mutator SET clause must identify a structured UDT column). Each mutator method name you specify must be a valid mutator method name for the respective structured type value.
A mutator method name is the same name as the attribute name that it modifies. Within the mutator SET clause, parentheses following the attribute name are not valid.
There is one additional restriction on mutator SET clauses.
Consider the following example:
SET mycol.R = x, mycol.y = mycol.R() + 3
As implemented by Vantage, any column references in an expression refer to the value of the column in the row before the row is updated. The system converts the two example clauses to the following single equality expression:
mycol = mycol.R(x).y(mycol.R() + 3)
This is a deviation from the ANSI SQL:2011 standard.
According to the ANSI SQL:2011 standard, the column reference to mycol in the second example equality expression of the mutator SET clause should reflect the change made to it from the first equality expression of the mutator SET clause, the assignment of x.
The two equality expressions are converted to the following single equality expression:
mycol = mycol.R(x).y(mycol.R(x).R() + 3)
Before you can begin using the Management Pack, you must create an adapter instance to identify the host from which the Management Pack will retrieve data.
Prerequisites
Procedure
- From the top navigation bar, select Administration. In the right panel, the Solutions view will be displayed.
- Select NetApp OCUM from the Solutions list on the right.
- Click the Configure icon. The Manage Solution window will appear. Note: Click the Add icon above the Instance Name list on the left to create multiple adapter instances.
- In the Manage Solution window, enter the following information:
- Instance Settings:
- Display Name: A name for this particular instance of the Management Pack.
- Description: Optional, but it can be helpful to describe multiple instances of the Management Pack.
- Basic Settings:
- Host: The host name or IP address.
- Credential: Select the credential you created in Creating a Credential (NetApp FAS/AFF).
- Advanced Settings:
- Port: The default port used by the management pack is 443.
- SSL Config: The SSL mode to use when connecting to the target. Can be configured to use SSL but do not verify the target's certificate (No Verify) or use SSL and verify the target's certificate (Verify).
- Request Timeout: The number of seconds to allow for the API to return a response.
- Max Concurrent Requests: The maximum number of requests to allow simultaneously.
- Event Severity: The maximum event severity to collect.
- Collect [resource]: Each of these options toggle collection of the specified resource type.
- Click Test Connection to ensure vROps can connect properly to the system.
- Click Save Settings to save your adapter instance configuration.
AudioStreamGenerator¶
Inherits: AudioStream < Resource < Reference < Object
Audio stream that generates sounds procedurally.
Description¶
This audio stream does not play back sounds, but expects a script to generate audio data for it instead. See also AudioStreamGeneratorPlayback.
See also AudioEffectSpectrumAnalyzer for performing real-time audio spectrum analysis.
Note: Due to performance constraints, this class is best used from C# or from a compiled language via GDNative. If you still want to use this class from GDScript, consider using a lower mix_rate such as 11,025 Hz or 22,050 Hz.
Tutorials¶
Property Descriptions¶
The length of the buffer to generate (in seconds). Lower values result in less latency, but require the script to generate audio data faster, resulting in increased CPU usage and more risk for audio cracking if the CPU can't keep up.
The sample rate to use (in Hz). Higher values are more demanding for the CPU to generate, but result in better quality.
In games, common sample rates in use are 11025, 16000, 22050, 32000, 44100, and 48000.
According to the Nyquist-Shannon sampling theorem, there is no quality difference to human hearing when going past 40,000 Hz (since most humans can only hear up to ~20,000 Hz, often less). If you are generating lower-pitched sounds such as voices, lower sample rates such as 32000 or 22050 may be usable with no loss in quality.
If you'd like to see which device your employee used to punch in or out, you can do this by visiting the employee's time card.
Click "Timecards" in the top navigation followed by "View All."
Click "View" next to an employee's name.
Once on the employee's time card, you will find a column referred to as "Device Used." Every time an employee punches in or out, our system will notate which device they have used.
This page displays the workflow log. Notice the name after the page title, "template-1257" — this is the name of the workflow, which includes the workflow name plus the CloudBees Flow auto-generated ID number, and it is the object of the workflow log in our example.
Links and actions above the table
Drop-down menu—Use the down-arrow to select Error, Warn, or Info. The first time you choose a severity level, it will become your default level—the one you always see first when you view this page. Selecting another level changes your default view.
The "star" icon allows you to save this workflow definition to your Home page.
The "bread crumbs" Project: SoftwareBuild / Workflow: template-1257 provide links to a previous web page.
Column descriptions
Time —Displays the time when an event occurred that caused the server to generate a message.
Severity —The three severity levels are:
ERROR—An unexpected failure was encountered while entering a state or launching a sub-action. Generally, this error indicates a critical problem with the workflow that requires fixing the workflow definition.
WARN—A non-critical issue was encountered while the workflow was running.
INFO—Provides workflow activity information including the state entered, transitions taken, and so on.
User —The name of the user or project principal that explicitly launched the job. This property is blank when the job is launched by a schedule.
Subject —Objects displayed in this column are the subject of the message. These objects are linked to either the Workflow Details or the State Details page.
Message —A text message generated by the CloudBees Flow server while the workflow was running.
DbProviderManifest.StoreSchemaDefinition Field
Definition
Value to pass to GetInformation to get the StoreSchemaDefinitionVersion.
public: static initonly System::String ^ StoreSchemaDefinition;
public static readonly string StoreSchemaDefinition;
staticval mutable StoreSchemaDefinition : string
Public Shared ReadOnly StoreSchemaDefinition As String
To start using CTP, browse to your organization’s CTP URL, then enter your username and password.
If your CTP license has not yet been set, enter it in the license page that opens.
Tip—Installing new license keys to License Server
If you need to add new license keys and your License Server is on the same host as CTP, you can do it directly from CTP. Just choose Administration> License Configuration, then copy your license key (provided by your Parasoft representative) into the Add New License area and click the Add License button. Once the key is processed, it will be added to the list in the Installed Licenses area.
If you don't see the Add New License area, be sure that you have the network license's Host name field set to localhost.
Crate kvdb_web
Version 0.9.0
A key-value database for use in browsers
Writes data both into memory and IndexedDB, reads the whole database in memory from the IndexedDB on open.
Database backed by both IndexedDB and in memory implementation.
An error that occurred when working with IndexedDB.
Generic key-value database.
Search¶
The free text search box is at the top of the page. You can use simple keywords or, as shown in the image, special characters for Lucene advanced queries.
You have also the option to use filters. On the left side of the page, you can see all the available filters.
The filters’ value lists are closed, but once you click on the arrow next to the filters they open. For filters with a long list of values, a free text search facility is available once you open the filter.
If you wish to remove the filter(s) applied, either click on the Clear all filters button or on the x next to the filter name above the search results.
The following filters are available:
Resource types: It groups the LRTs by their type, i.e. corpora, tools/services, lexical/conceptual resources, models, grammars.
Service functions: It groups services according to the function they perform (e.g. Machine Translation).
Intended LT applications: It groups the LRTs by the LT application for which they can be used (e.g. datasets created for Text Categorization).
Languages: It groups the LRTs depending on the language(s) of the contents, or, in the case of tools and services, the language(s) they can process. To facilitate search, languages are grouped into three groups: Official EU languages, Other EU/European languages and Other languages.
Media types: It groups the data resources depending on their media type, i.e. text, video, audio, image, numerical text.
Licences: It groups LRTs according to their licence(s) (for instance, resources under CC-BY 4.0 licence).
Condition of use for data: It groups data resources according to the conditions of use of their licence(s) or the access rights indicated by the creator of the metadata record. The conditions of use for standard licences have been added by the ELG legal team; for proprietary licences, we rely on the providers’ information. Only a subset of conditions of use deemed of assistance for search purposes is included in the facet. Please read the licence to ensure that you can use the data for your purposes.
Note
For a resource to be filtered, and presented as a result to the user, the respective metadata field must have been filled in by the creator of the metadata record.
API Reference
This page provides an auto-generated summary of xskillscore’s API. For more details and examples, refer to the relevant chapters in the main part of the documentation.
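As a rough illustration of the deterministic metrics listed below, the sketch assumes two toy xarray DataArrays; the dimension names are arbitrary placeholders:

import numpy as np
import xarray as xr
import xskillscore as xs

obs = xr.DataArray(np.random.rand(3, 4, 5), dims=["time", "lat", "lon"])
fct = xr.DataArray(np.random.rand(3, 4, 5), dims=["time", "lat", "lon"])

r = xs.pearson_r(obs, fct, dim="time")        # a correlation metric
err = xs.rmse(obs, fct, dim=["lat", "lon"])   # a distance metric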
Deterministic Metrics
Correlation Metrics
Distance Metrics
Probabilistic Metrics
Currently, most of our probabilistic metrics are ported over from properscoring to work with xarray DataArrays and Datasets.
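For example, the CRPS of an ensemble forecast can be computed along an ensemble dimension; in this sketch the dimension name "member" is an assumption (it is the usual default):

import numpy as np
import xarray as xr
import xskillscore as xs

obs = xr.DataArray(np.random.rand(3, 4, 5), dims=["time", "lat", "lon"])
fct_ens = xr.DataArray(
    np.random.rand(3, 4, 5, 10), dims=["time", "lat", "lon", "member"]
)

crps = xs.crps_ensemble(obs, fct_ens)  # reduces over the "member" dimension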
Contingency-based Metrics
These metrics rely upon the construction of a Contingency object. The user calls the individual methods to access metrics based on the table; a short sketch follows the list below.
Contingency table
Dichotomous-Only (yes/no) Metrics
Multi-Category Metrics
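A minimal sketch of that workflow, assuming category edges that bin the data into two classes (the edges and dimension names are placeholders):

import numpy as np
import xarray as xr
import xskillscore as xs

obs = xr.DataArray(np.random.rand(4, 5), dims=["lat", "lon"])
fct = xr.DataArray(np.random.rand(4, 5), dims=["lat", "lon"])
category_edges = np.array([0.0, 0.5, 1.0])

contingency = xs.Contingency(
    obs, fct, category_edges, category_edges, dim=["lat", "lon"]
)
print(contingency.table())      # the contingency table itself
print(contingency.hit_rate())   # a dichotomous (yes/no) metric
print(contingency.accuracy())   # also defined for multi-category tables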
Comparative
Tests to compare whether one forecast is significantly better than another one.
Resampling
Functions for resampling from a dataset with or without replacement that create a new iteration dimension.
Note: 4.1.x and earlier releases are End of Life and no longer supported. See EOL Statements.
Backport of CVE-2020-25829: Cache pollution.¶
References: pull request 9601
Backport of CVE-2020-14196: Enforce webserver ACL.¶
References: pull request 9283
Fix compilation on systems that do not define HOST_NAME_MAX.¶
References: #8640, #9127, pull request 9129
Only log qname parsing errors when ‘log-common-errors’ is set.¶
References: pull request 8868
Backport of security fixes for CVE-2020-10995, CVE-2020-12244 and CVE-2020-10030, plus avoid a crash when loading an invalid RPZ.¶
References: pull request 9117
Update python dependencies for docs generation.¶
References: pull request 8809
References: pull request 8753
Backport 8525 to rec 4.1.x: Purge map of failed auths periodically by keeping a last changed timestamp¶
References: pull request 8554
Backport 8470 to rec 4.1.x: prime NS records of root-servers.net parent (.net)¶
References: pull request 8544
Backport 8340 to rec 4.1.x: issue with “zz” abbreviation for IPv6 RPZ triggers¶
References: pull request 8543
Backport 7068 to 4.1.x: Do the edns data dump for all threads¶
References: pull request 8542
Backport #7951 to 4.1.x: update boost.m4¶
References: pull request 8123
Add statistics counters for AD and CD queries.¶
References: pull request 7906
Add missing getregisteredname Lua function¶
References: pull request 7912
Add the disable-real-memory-usage setting to skip expensive collection of detailed memory usage info.¶
References: #7661, pull request 7673
Fix DNSSEC validation of wildcards expanded onto themselves.¶
References: #7714, pull request 7816
Provide CPU usage statistics per thread (worker & distributor).¶
References: pull request 7647
Use a bounded load-balancing algo to distribute queries.¶
References: #7507, pull request 7634
Implement a configurable ECS cache limit so responses with an ECS scope more specific than a certain threshold and a TTL smaller than a specific threshold are not inserted into the records cache at all.¶
References: #7572, #7631, pull request 7651
Correctly interpret an empty AXFR response to an IXFR query.¶
References: #7494, pull request 7495
Since Spectre/Meltdown, system calls have become more expensive. This made exporting a very high number of protobuf messages costly, which is addressed in this release by reducing the number of syscalls per message.
Add an option to export only responses over protobuf to the Lua protobufServer() directive.¶
References: pull request 7434
Reduce systemcall usage in protobuf logging. (See #7428.)¶
References: #7428, pull request 7430
This release fixes a bug when trying to build PowerDNS Recursor with protobuf support disabled, thus this release is only relevant to people building PowerDNS Recursor from source and not if you’re installing it as a package from our repositories.
PowerDNS Recursor release 4.1.9 introduced a call to the Lua ipfilter() hook that required access to the DNS header, but the corresponding variable was only declared when protobuf support had been enabled.¶
References: pull request 7403
Try another worker before failing if the first pipe was full¶
References: #7383, pull request 7377
Properly apply Lua hooks to TCP queries, even with pdns-distributes-queries set (CVE-2019-3806, PowerDNS Security Advisory 2018-01). Validates records in the answer section of responses with AA=0 (CVE-2019-3807, PowerDNS Security Advisory 2019-02).¶
References: pull request 7397
This release fixes Security Advisory 2018-09 that we recently discovered, affecting PowerDNS Recursor up to and including 4.1.7.
The issue is that a remote attacker can trigger an out-of-bounds memory read via a crafted query, while computing the hash of the query for a packet cache lookup, possibly leading to a crash.
When the PowerDNS Recursor is run inside a supervisor like supervisord or systemd, a crash will lead to an automatic restart, limiting the impact to a somewhat degraded service.
Crafted query can cause a denial of service (CVE-2018-16855, PowerDNS Security Advisory 2018-09)¶
References: pull request 7221
This release updates the mitigation for Security Advisory 2018-07, reverting the EDNS fallback strictness increase. This is necessary because there are a lot of broken name servers on the Internet.
Revert ‘Keep the EDNS status of a server on FormErr with EDNS’¶
References: pull request 7172
Refuse queries for all meta-types¶
References: pull request 7174
This release reverts #6980, it could lead to DNSSEC validation issues.
Revert “rec: Authority records in AA=1 CNAME answer are authoritative”.¶
References: #7158, pull request 7159
This release fixes the following security advisories:
Add pdnslog to lua configuration scripts (Chris Hofstaedtler)¶
References: #6848, pull request 6919
Fix compilation with libressl 2.7.0+¶
References: #6943, pull request 6948
Export outgoing ECS value and server ID in protobuf (if any)¶
References: #6989, #6991, pull request 7004
Switch to devtoolset 7 for el6¶
References: #7040, pull request 7122
Allow the signature inception to be off by a number of seconds. (Kees Monshouwer)¶
References: #7081, pull request 7125
Delay the creation of rpz threads until we have dropped privileges¶
References: #6792, pull request 6984
Crafted answer can cause a denial of service (CVE-2018-10851, PowerDNS Security Advisory 2018-04)¶
References: pull request 7151
Packet cache pollution via crafted query (CVE-2018-14626, PowerDNS Security Advisory 2018-06)¶
References: pull request 7151
Crafted query for meta-types can cause a denial of service (CVE-2018-14644, PowerDNS Security Advisory 2018-07)¶
References: pull request 7151
Cleanup the netmask trees used for the ecs index on removals¶
References: #6960, pull request 6961
Make sure that the ECS scope from the auth is < to the source¶
References: #6605, pull request 6963
Authority records in aa=1 cname answer are authoritative¶
References: #6979, pull request 6980
Avoid a memory leak in catch-all exception handler¶
References: pull request 7073
Don’t require authoritative answers for forward-recurse zones¶
References: #6340, pull request 6741
Release memory in case of error in the openssl ecdsa constructor¶
References: pull request 6917
Convert a few uses to toLogString to print DNSName’s that may be empty in a safer manner¶
References: #6924, pull request 6925
Avoid a crash on DEC Alpha systems¶
References: pull request 6945
Clear all caches on (N)TA changes¶
References: #6949, pull request 6951
Split pdns_enable_unit_tests. (Chris Hofstaedtler)¶
References: pull request 6436
Add a new max-udp-queries-per-round setting.¶
References: pull request 6518
Fix warnings reported by gcc 8.1.0.¶
References: pull request 6590
Tests: replace awk command by perl.¶
References: pull request 6809
Allow the snmp thread to retrieve statistics.¶
References: pull request 6720
Don’t account chained queries more than once.¶
References: #6462, pull request 6465
Make rec_control respect include-dir.¶
References: #6536, pull request 6557
Load lua scripts only in worker threads.¶
References: #6567, pull request 6812
Purge all auth/forward zone data including subtree. (@phonedph1)¶
References: pull request 6873
This release improves the stability and resiliency of the RPZ implementation, prevents metrics gathering from slowing down the processing of DNS queries and fixes an issue related to the cleaning of EDNS Client Subnet entries from the cache.
Move carbon/webserver/control/stats handling to a separate thread.¶
References: pull request 6567
Use a separate, non-blocking pipe to distribute queries.¶
References: pull request 6566
Add a subtree option to the API cache flush endpoint.¶
References: #6550, pull request 6562
Update copyright years to 2018 (Matt Nordhoff).¶
References: #6130, #6610, pull request 6611
Fix a warning on botan >= 2.5.0.¶
References: #6474, pull request 6478, pull request 6596
Add _raw versions for QName / ComboAddresses to the FFI API.¶
References: pull request 6583
Respect the AXFR timeout while connecting to the RPZ server.¶
References: pull request 6469
Don't increase the DNSSEC validations counters when running with process-no-validate.¶
References: pull request 6467
Count a lookup into an internal auth zone as a cache miss.¶
References: pull request 6313
Delay the loading of RPZ zones until the parsing is done, fixing a race condition.¶
References: #6237, pull request 6588
Reorder includes to avoid boost L conflict.¶
References: #6358, #6516, #6517, #6542, pull request 6595
Use canonical ordering in the ECS index.¶
References: #6505, pull request 6586
Add -rdynamic to C{,XX}FLAGS when we build with LuaJIT.¶
References: pull request 6514, pull request 6630
Increase MTasker stacksize to avoid crash in exception unwinding (Chris Hofstaedtler).¶
References: #6179, pull request 6418
Use the SyncRes time in our unit tests when checking cache validity (Chris Hofstaedtler).¶
References: #6086, pull request 6419
Disable only our own tcp listening socket when reuseport is enabled¶
References: #6849, pull request 6850
This release improves the stability and resiliency of the RPZ implementation and fixes several issues related to EDNS Client Subnet.
Add FFI version of gettag().¶
References: pull request 6344
Add the option to set the AXFR timeout for RPZs.¶
References: pull request 6268, pull request 6290, pull request 6298, pull request 6303
IXFR: correct behavior of dealing with DNS Name with multiple records and speed up IXFR transaction (Leon Xu).¶
References: pull request 6172
Add RPZ statistics endpoint to the API.¶
References: #6225, pull request 6379
Retry loading RPZ zones from server when they fail initially.¶
References: #6238, pull request 6237, pull request 6293, pull request 6336
Fix ECS-based cache entry refresh code.¶
References: pull request 6300
Fix ECS-specific NS AAAA not being returned from the cache.¶
References: #6319, pull request 6320
This is the second release in the 4.1 train.
This release fixes PowerDNS Security Advisory 2018-01.
The full release notes can be read on the blog.
This is a release on the stable branch, containing a fix for the abovementioned security issue and several bug fixes from the development branch.
Don’t process records for another class than IN. We don’t use records of another class than IN, but we used to store some of them in the cache which is useless. Just skip them.¶
References: #6198, pull request 6085
References: pull request 6215
Fix the computation of the closest encloser for positive answers. When the positive answer is expanded from a wildcard with NSEC3, the closest encloser is not always parent of the qname, depending on the number of labels in the initial wildcard.¶
References: #6199, pull request 6092
Pass the correct buffer size to arecvfrom(). The incorrect size could possibly cause DNSSEC failures.¶
References: #6200, pull request 6095
Fix to make primeHints threadsafe, otherwise there's a small chance on startup that the root-server IPs will be incorrect.¶
References: #6212, pull request 6209
Don’t validate signature for “glue” CNAME, since anything else than the initial CNAME can’t be considered authoritative.¶
References: #6201, pull request 6137
This is the first release in the 4.1 train.
The full release notes can be read on the blog.
This is a major release containing significant speedups (both in throughput and latency), enhanced capabilities and a highly conformant and robust DNSSEC validation implementation that is ready for heavy production use. In addition, our EDNS Client Subnet implementation now scales effortlessly to networks needing very fine grained scopes (as used by some ‘country sized’ service providers).
Changes since 4.1.0-rc3:
Dump the validation status of negcache entries, fix DNSSEC type.¶
References: pull request 5972
Fix DNSSEC validation of DS denial from the negative cache.¶
References: pull request 5978
Store additional records as non-auth, even on AA=1 answers.¶
References: pull request 5997
Don't leak when the loading of a public ECDSA key fails.¶
References: pull request 6008
When validating DNSKeys, the zone should be part of the signer.¶
References: pull request 6009
Cache Secure validation state when inserting negcache entries.¶
References: pull request 5980
The third Release Candidate adds support for Botan 2.x (and removes support for Botan 1.10!), has a lot of DNSSEC fixes, features a cleaned up web UI and has miscellaneous minor improvements.
Add the DNSSEC validation state to the DNSQuestion Lua object (although the ability to update the validation state from these hooks is postponed to after 4.1.0).¶
References: #5888, pull request 5895
Add support for Botan 2.x and remove support for Botan 1.10.¶
References: #2250, #5797, pull request 5498
Print more details of trust anchors. In addition, the trace output that mentions if data from authoritative servers gets accepted now also prints the TTL and clarifies the ‘place’ number previously printed.¶
References: pull request 5876
Better support for deleting entries in NetmaskTree and NetmaskGroup.¶
References: pull request 5616
Prevent possible downgrade attacks in the recursor.¶
References: pull request 5889
Split NODATA / NXDOMAIN NSEC wildcard denial proof of existence. Otherwise there is a very real risk that a NSEC will cover a more specific wildcard and we end up with what looks like a NXDOMAIN proof but is a NODATA one.¶
References: #5882, pull request 5885
Fix incomplete validation of cached entries.¶
References: pull request 5904
Fix going Insecure on NSEC3 hashes with too many iterations, since we could have gone Bogus on a positive answer synthesized from a wildcard if the corresponding NSEC3 had more iterations that we were willing to accept, while the correct result is Insecure.¶
References: pull request 5912
Sort NS addresses by speed and remove old ones.¶
References: #1066, pull request 5877
Purge nsSpeeds entries even if we get less than 2 new entries.¶
References: pull request 5896
Add EDNS to truncated, servfail answers.¶
References: #5618, pull request 5881
Use _exit() when we really really want to exit, for example after a fatal error. This stops us dying while we die. A call to exit() will trigger destructors, which may paradoxically stop the process from exiting, taking down only one thread, but harming the rest of the process.¶
References: pull request 5917
In the recursor secpoll code, we assumed the TXT record would be the first record we received. Sometimes it was the RRSIG, leading to a silent error, and no secpoll check. Fixed the assumption, added an error.¶
References: pull request 5930
Don’t crash when asked to run with zero threads.¶
References: pull request 5938
Only accept types not matching the query if we asked for ANY. Even from forward-recurse servers.¶
References: #5934, pull request 5939
Allow the use of a ‘self-resolving’ NS if cached A / AAAA exists. Before this, we could skip a perfectly valid NS for which we had retrieved the A and / or AAAA entries, for example via a glue.¶
References: #2758, pull request 5937
Add the config-name argument to the definition of configname. There was a bug where the config-name parameter was not used to change the path of the config file. This meant that some commands via rec_control (e.g. reload-acls) would fail when run against a recursor which had config-name defined. The correct behaviour was present in some, but not all, definitions of configname. (@jake2184)¶
References: pull request 5961
The second Release Candidate contains several correctness fixes for DNSSEC, mostly in the area of verifying negative responses.
Don’t directly store NSEC3 records in the positive cache.¶
References: pull request 5834
Improve logging for the built-in webserver and the Carbon sender.¶
References: pull request 5805
New b.root ipv4 address (Kees Monshouwer).¶
References: #5663, pull request 5824
Add experimental metrics that track the time spent inside PowerDNS per query. These metrics ignore time spent waiting for the network.¶
References: pull request 5774
Add log-timestamp setting. This option can be used to disable printing timestamps to stdout, this is useful when using systemd-journald or another supervisor that timestamps output by itself.¶
References: pull request 5842
Check that the NSEC covers an empty non-terminal when looking for NODATA.¶
References: pull request 5808
Disable validation for infrastructure queries (e.g. when recursing for a name). Also validate entries from the Negative cache if they were not validated before.¶
References: #5827, pull request 5835
Fix DNSSEC validation for denial of wildcards in negative answers and denial of existence proofs in wildcard-expanded positive responses.¶
References: #5861, pull request 5868
Fix DNSSEC validation when using -flto.¶
References: pull request 5873
Lowercase all outgoing qnames when lowercase-outgoing is set.¶
References: pull request 5740
Create socket-dir from the init-script.¶
References: #5439, pull request 5762
Fix crashes with uncaught exceptions in MThreads.¶
References: pull request 5803!
Improve --quiet=false output to include DNSSEC and more timing details.¶
References: pull request 5756
Add DNSSEC test vectors for RSA, ECDSA, ed25519 and GOST.¶
References: pull request 5733
Wrap the webserver’s and Resolver::tryGetSOASerial objects into smart pointers (also thanks to Chris Hofstaedtler for reviewing!)¶
References: pull request 5543
Add more unit tests for the NetmaskTree and ECS cache index.¶
References: pull request 5545
Switch the default webserver's ACL to 127.0.0.1, ::1.¶
References: pull request 5588
Add help text on autodetecting systemd support. (Ruben Kerkhof thanks for reporting!)¶
References: #5524, pull request 5598
Add log-rpz-changes to log RPZ additions and removals.¶
References: pull request 5622
Log the policy type (QName, Client IP, NS IP…) over protobuf.¶
References: pull request 5621
Remove unused SortList compare operator for ComboAddress.¶
References: pull request 5637
Add support for dumping the in-memory RPZ zones to a file.¶
References: pull request 5620
Support for identifying devices by id such as mac address.¶
References: pull request 5646
Implement dynamic cache sizing.¶
References: pull request 5699
Improve dnsbulktest experience in Travis for more robustness.¶
References: pull request 5755
Set TC=1 if we had to omit part of the AUTHORITY section.¶
References: pull request 5772
autoconf: set --with-libsodium to auto.¶
References: pull request 5764
Don’t fetch the DNSKEY of a zone to validate the DS of the same zone.¶
References: pull request 5569
Improve DNSSEC debug logging,¶
References: pull request 5614
Add NSEC records on nx-trust cache hits.¶
References: #5649, pull request 5672
Handle NSEC wrap-around.¶
References: #5650, pull request 5671
Fix erroneous check for section 4.1 of rfc6840.¶
References: #5648, #5651, pull request 5670
Handle direct NSEC queries.¶
References: #5705, pull request 5715
Detect zone cuts by asking for DS instead of NS.¶
References: #5681, pull request 5716
Do not allow direct queries for RRSIG or NSEC3.¶
References: #5735, pull request 5738
The target zone being insecure doesn't mean that the denial of the DS is too, if the parent zone is Secure.¶
References: pull request 5771
Add a missing header for PRId64 in the negative cache, required on EL5/EL6.¶
References: pull request 5530
Prevent an infinite loop if we need auth and the best match is not.¶
References: pull request 5549
Be more careful about the validation of negative answers.¶
References: pull request 5570
Fix libatomic detection on ppc64. (Sander Hoentjen)¶
References: #5456, pull request 5599
Fix sortlist in the presence of CNAME. (Benoit Perroud thanks for reporting this issue!)¶
References: #5357, pull request 5615
Fix cache handling of ECS queries with a source length of 0.¶
References: pull request 5515
Handle SNMP alarms so we can reconnect to the master.¶
References: #5327, pull request 5328
Fix Recursor 4.1.0 alpha 1 compilation on FreeBSD. (@RvdE)¶
References: pull request 5662
Remove pdns.PASS and pdns.TRUNCATE.¶
References: pull request 5739
Fix a crash when getting a public GOST key if the private one is not set.¶
References: pull request 5734
Don’t negcache entries for longer than their RRSIG validity.¶
References: pull request 5773
Gracefully handle Socket::accept() returning a null pointer on EAGAIN.¶
References: pull request 5792
This is the first release of the PowerDNS Recursor in the 4.1 release train. This release contains several performance and correctness improvements in the EDNS Client subnet area, as well as better DNSSEC processing.
Add support for RPZ wildcarded target names.¶
References: #5237, pull request 5265
Add server-side TCP Fast Open support. This adds a new option tcp-fast-open.¶
References: #5128, pull request 5138
Pass tcp to gettag() to allow a script to take different actions whether a query came in over TCP or UDP.¶
References: pull request 4569
Allow setting the requestor ID field in the DNSQuestion from all hooks.¶
References: pull request 4569
Implement CNAME wildcards in recursor authoritative component.¶
References: #2818, pull request 5063
Allow returning the DNSQuestion.data table from gettag().¶
References: #4981, pull request 4982
References: pull request 4990, pull request 5404
Allow access to EDNS options from the gettag() hook.¶
References: #5195, pull request 5198
Pass tcp to gettag(), allow setting the requestor ID from hooks.¶
References: pull request 4569
Allow retrieving stats from Lua via the getStat() call.¶
References: pull request 5293
References: pull request 5409
Add a cpu-map directive to set CPU affinity per thread.¶
References: pull request 5482
Implement “on-the-fly” DNSSEC processing. This places the DNSSEC processing alongside the regular recursion, reducing possible cornercases, adding unit tests and making the code better maintainable.¶
References: #4254, #4362, #4490, #4994, pull request 5223, pull request 5463, pull request 5486, pull request 5528
Use ECS when updating the validation state if needed.¶
References: pull request 5484
Use the RPZ zone’s TTL and add a new maxTTL setting.¶
References: pull request 5057
RPZ updates are done zone by zone, zones are now shared pointers.¶
References: #5231, #5236, pull request 5275, pull request 5307
Split SyncRes::doResolveAt, add const and static whenever possible. Possibly improving performance while making the code easier to maintain.¶
References: pull request 5106
Packet cache speedup and cleanup.¶
References: pull request 5102
Make Lua mandatory for recursor builds.¶
References: pull request 5146
Use one listening socket per thread when reuseport is enabled.¶
References: pull request 5103, pull request 5487
Stop (de)serializing DNSQuestion.data.¶
References: pull request 5141
Refactor the negative cache into a class.¶
References: pull request 5226
Only check the netmask for subnet specific cache entries.¶
References: pull request 5319
Refactor and split SyncRes::doResolveAt(), making it easier to understand. Get rid of SyncRes::d_nocache, making sure we can't get into a root refresh loop. Limit the use of global variables in SyncRes, to make it easier to understand the interaction between components.¶
References: pull request 5236
Add an ECS index to the cache¶
References: pull request 5461, pull request 5472
When dumping the cache, also dump RRSIGs.¶
References: pull request 5511
Don’t always override loglevel to 6.¶
References: pull request 5485
Make more specific Netmasks < to less specific ones.¶
References: pull request 5406, pull request 5530
Fix validation at the exact RRSIG inception or expiration time.¶
References: pull request 5525
Fix remote/local inversion in preoutquery().¶
References: #4969, pull request 4984
Show a useful error when an invalid lua-config-file is configured.¶
References: #4939, #5075, pull request 5078
Fix DNSQuestion members alterations from Lua not being taken into account.¶
References: pull request 4860
Ensure locks can not be copied.¶
References: pull request 5209
Only apply root-nx-trust if the received SOA is “.”.¶
References: #5246, pull request 5252
Don’t throw an exception when logging to protobuf without a question set.¶
References: pull request 5312
Correctly truncate EDNS Client Subnetmasks.¶
References: pull request 5320
Clean up auth/recursor code mismatches in the API (Chris Hofstaedtler).¶
References: #5398, pull request 5466
Only increase no-packet-error on the first read.¶
References: #5474, pull request 5474
Splunk Web is the primary interface for searching, problem investigation, reporting on results, and administrating Splunk platform deployments.
About Splunk Home
Splunk Home is the initial page in Splunk Web. Splunk Home is an interactive portal to the data and applications that you can access from this Splunk Enterprise instance. The main parts of the Splunk Home page are the Splunk bar, the Apps panel, and the Explore Splunk panel.
The following screen image shows the Splunk Home page for Splunk Enterprise. Splunk Cloud has a similar Home Page. The differences between Splunk Enterprise and Splunk Cloud are described in the following sections.
- Splunk Enterprise
- You can take a product tour, add data, browse for new apps, or access the documentation.
- Splunk Cloud
- You can take a product tour or access the documentation that is used the most.
Splunk bar
The Splunk bar appears on every page in Splunk Web. You use this bar to switch between apps, manage and edit your Splunk platform configuration, view system-level messages, and monitor the progress of search jobs.
1. Click Search & Reporting.
- When you are in an app, the Application menu is added to the Splunk bar. Use this menu to switch between apps.
We will explore the Search app in detail. For now, let's return to Splunk Home.
2. Enterprise
- The Account menu displays Administrator for now, but this menu is your Account menu. It shows Administrator initially, because that is the default user name for a new installation.
- 2. In the Full name field, type your first name and surname.
- For this tutorial, we will not change the other settings.
- 3. Click Save.
- 4. Click the Splunk logo to return to Splunk Home.
- Splunk Cloud
- The Account menu displays your name.
- 2. The Full name field should list your first name and surname. You can change the order of the names, or type a nickname.
- For this tutorial, we will not change the other settings.
- 3. Click Save.
- 4. Enterprise
- The Help menu contains a set of links to the product release notes, tutorials, Splunk Answers, and the Splunk Support and Services page. You can also search the online documentation.
- Splunk Cloud
- The Support & Services menu contains a set of links to Splunk Answers and the Documentation home page.
An INSERT process performs the following actions:
- Sets a WRITE lock on the rowkey, partition, or table, as appropriate.
- Performs the entire INSERT operation as an all-or-nothing operation in which every row is inserted successfully or no rows are inserted.
This is to prevent a partial insert from occurring.
The rules for rolling back multistatement INSERT requests for statement independence frequently enable a more relaxed handling of INSERT errors within a transaction or multistatement request. For information about failed INSERT operations in situations that involve statement independence, see Multistatement and Iterated INSERT Requests.
The INSERT operation takes more processing time on a table defined with FALLBACK or a secondary, join, or hash index, because the FALLBACK copy of the table or index also must be changed.
MODX packaged by Bitnami for Microsoft Azure
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
MODX Revolution is an easy-to-use Content Management System (CMS) and Application Framework rolled into one.
Lifecycle Manager configuration options
Configuration options available in opscenterd.conf for Lifecycle Manager.
Reference of configuration options available in opscenterd.conf for Lifecycle Manager. After changing properties in the opscenterd.conf file, restart OpsCenter for the changes to take effect.
- [lifecycle_manager] db_location
- The location of the lcm.db database used for storing Lifecycle Manager information. Default: /var/lib/opscenter/lcm.db
Note: The data (cluster topology, configuration profiles, credentials, repositories, job history, and so forth) for Lifecycle Manager is stored in the lcm.db database. Your organization is responsible for backing up the lcm.db database. You must also configure failover to mirror the lcm.db database.
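For example, the setting lives in the [lifecycle_manager] section of opscenterd.conf; the path below is simply the default noted above:

[lifecycle_manager]
db_location = /var/lib/opscenter/lcm.db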
CommOps Team Member
How to become a CommOps Team Member:
Please refer to this section.
Mindshare Committee
Fedora Design & Badges
Fedora Council
Community Outreach teams (Ambassadors, Join SIG, )
Current members
The FAS group currently holds the list of people who are currently members.
Contact information
Discourse forum:
IRC channel: #fedora-commops on Libera.Chat
Telegram group: @fedoracommops on Telegram
Matrix/Element:
- pipelines at (need Developer access permissions). Results are reported in the #qa-nightly Slack channel.
Testing staging
We run scheduled pipelines each night to test staging.
You can find these.
Please note, Maintainer permission is required. By clicking the name (not the play icon) of one of the parallel jobs, you are prompted to enter variables. You can use any of the variables that can be used with gitlab-qa as well as these:
For now manual jobs with custom variables don want to run the existing tests against a live GitLab instance or against a pre-built Docker image,..
The tracking technologies we use fall into the following categories:
These tracking technologies are necessary for the website to function and can’t be switched off in our systems. You can set your browser to block or alert you about these tracking technologies, but it could result in parts of the site not working properly.
These tracking technologies enable the website to provide enhanced functionality and personalization to make the content of the site more specific for you as a user. If you don’t allow these tracking technologies, all of the functions we use to personalize the site will not function properly.
If you want to change your settings for tracking technologies on our website, you can click the button below
We encourage you to review our Privacy notice for further information on the processing of your personal data.
Klarna has a team of data protection specialists working solely with data protection and privacy. We also have a special team of customer service specialists for data protection matters. You can always reach us at [email protected].
This tracking technology notice for docs.klarna.com was last updated on 30 March 2021.
1.12 Tuples and Options
If you want to combine a small number of values in a single value, and if the values have different types (so that a list doesn’t work), you can use a tuple as an alternative to creating a new datatype with a single variant.
The values form creates a tuple from any number of values. The type of a tuple reveals the type of every component value in the tuple, separating the types with *.
Using values, this consume function can effectively return two values each time that it is called:
To extract the component values from a tuple, match the tuple with names using define-values.
The convenience functions fst and snd can be used in the special case of a 2-value tuple to extract the first or second component.
Sometimes, instead of always returning multiple values, you’ll want a function that returns either one value or no value. A tuple is no help for that case, but Plait predefines a helpful datatype called Optionof:
(define-type (Optionof 'a)
  (none)
  (some [v : 'a]))
The 'a in this definition of Optionof indicates that you can return any kind of value in a some.
Faceted Search¶
The library comes with a simple abstraction aimed at helping you develop faceted navigation for your data.
Note
This API is experimental and will be subject to change. Any feedback is welcome.
Configuration¶
You can provide several configuration options (as class attributes) when declaring a FacetedSearch subclass:
index
- the name of the index (as string) to search through, defaults to '_all'.
doc_types
- list of Document subclasses or strings to be used, defaults to ['_all'].
fields
- list of fields on the document type to search through. The list will be passed to MultiMatch query so can contain boost values ('title^5'), defaults to ['*'].
facets
- dictionary of facets to display/filter on. The key is the name displayed and values should be instances of any Facet subclass, for example: {'tags': TermsFacet(field='tags')}
sort
- tuple or list of fields on which the results should be sorted. The format of the individual fields is the same as those passed to sort().
Facets¶
There are several different facets available:
TermsFacet
- provides an option to split documents into groups based on a value of a field, for example TermsFacet(field='category')
DateHistogramFacet
- split documents into time intervals, example: DateHistogramFacet(field="published_date", interval="day")
HistogramFacet
- similar to DateHistogramFacet but for numerical values: HistogramFacet(field="rating", interval=2)
RangeFacet
- allows you to define your own ranges for numerical fields: RangeFacet(field="comment_count", ranges=[("few", (None, 2)), ("lots", (2, None))])
NestedFacet
- is just a simple facet that wraps another to provide access to nested documents:
NestedFacet('variants', TermsFacet(field='variants.color'))
By default facet results will only calculate document count, if you wish for
a different metric you can pass in any single value metric aggregation as the
metric kwarg (
TermsFacet(field='tags', metric=A('max',
field='timestamp'))). When specifying
metric the results will be, by
default, sorted in descending order by that metric. To change it to ascending
specify
metric_sort="asc" and to just sort by document count use
metric_sort=False.
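For example, a hedged sketch of a facet ordered by a metric instead of document count (field names are illustrative):

from elasticsearch_dsl import A, TermsFacet

tags = TermsFacet(
    field='tags',
    metric=A('max', field='timestamp'),  # any single-value metric aggregation
    metric_sort='asc',                   # sort buckets ascending by that metric
)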
Advanced¶
If you require any custom behavior or modifications simply override one or more of the methods responsible for the class’ functions:
search(self)
- is responsible for constructing the
Search object used. Override this if you want to customize the search object (for example by adding a global filter for published articles only).
query(self, search)
- adds the query portion of the search (if search input is specified), by default using
MultiField query. Override this if you want to modify the query type used.
highlight(self, search)
- defines the highlighting on the
Search object and returns a new one. Default behavior is to highlight on all fields specified for search.
Usage¶
The custom subclass can be instantiated empty to provide an empty search
(matching everything) or with
query and
filters.
query
- is used to pass in the text of the query to be performed. If
None is passed in (default) a
MatchAll query will be used. For example
'python web'
filters
- is a dictionary containing all the facet filters that you wish to apply. Use the name of the facet (from
.facets attribute) as the key and one of the possible values as the value. For example
{'tags': 'python'}.
Response¶
the response returned from the
FacetedSearch object (by calling
.execute()) is a subclass of the standard
Response class that adds a
property called
facets which contains a dictionary with lists of buckets -
each represented by a tuple of key, document count and a flag indicating
whether this value has been filtered on.
Example¶
from datetime import date

from elasticsearch_dsl import FacetedSearch, TermsFacet, DateHistogramFacet

class BlogSearch(FacetedSearch):
    doc_types = [Article, ]
    # fields that should be searched
    fields = ['tags', 'title', 'body']

    facets = {
        # use bucket aggregations to define facets
        'tags': TermsFacet(field='tags'),
        'publishing_frequency': DateHistogramFacet(field='published_from', interval='month')
    }

    def search(self):
        # override methods to add custom pieces
        s = super().search()
        return s.filter('range', publish_from={'lte': 'now/h'})

bs = BlogSearch('python web', {'publishing_frequency': date(2015, 6)})
response = bs.execute()

# access hits and other attributes as usual
total = response.hits.total
print('total hits', total.relation, total.value)
for hit in response:
    print(hit.meta.score, hit.title)

for (tag, count, selected) in response.facets.tags:
    print(tag, ' (SELECTED):' if selected else ':', count)

for (month, count, selected) in response.facets.publishing_frequency:
    print(month.strftime('%B %Y'), ' (SELECTED):' if selected else ':', count)
 | https://elasticsearch-dsl.readthedocs.io/en/7.1.0/faceted_search.html | 2022-01-16T22:47:20 | CC-MAIN-2022-05 | 1642320300244.42 | [] | elasticsearch-dsl.readthedocs.io |
Emptying a bucket
You can empty a bucket's contents using the Amazon S3 console, AWS SDKs, or AWS Command Line Interface (AWS CLI). When you empty a bucket, you delete all the objects, but you keep the bucket. Emptying a bucket cannot be undone. When you empty a bucket that has S3 Bucket Versioning enabled or suspended, all versions of all the objects in the bucket are deleted. For more information, see Working with objects in a versioning-enabled bucket.
You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete them. For more information, see Setting lifecycle configuration on a bucket
Troubleshooting
Objects added to the bucket while the empty bucket action is in progress might be deleted. To prevent new objects from being added to a bucket while the empty bucket action is in progress, you might need to stop your AWS CloudTrail trails from logging events to the bucket. For more information, see Turning off logging for a trail in the AWS CloudTrail User Guide.
Another alternative to stopping CloudTrail trails from being added to the bucket is to add a deny s3:PutObject statement to your bucket policy. If you want to store new objects in the bucket, you should remove the deny s3:PutObject statement from your bucket policy. For more information, see Example — Object operations and IAM JSON policy elements: Effect in the IAM User Guide
You can use the Amazon S3 console to empty a bucket, which deletes all of the objects in the bucket without deleting the bucket.
To empty an S3 bucket
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the Bucket name list, select the option next to the name of the bucket that you want to empty, and then choose Empty.
On the Empty bucket page, confirm that you want to empty the bucket by entering the bucket name into the text field, and then choose Empty.
Monitor the progress of the bucket emptying process on the Empty bucket: Status page.
You can empty a bucket using the AWS CLI only if the bucket does not have Bucket
Versioning enabled. If versioning is not enabled, you can use the
rm
(remove) AWS CLI command with the
--recursive parameter to empty the
bucket (or remove a subset of objects with a specific key name prefix).
The following
rm command removes objects that have the key name
prefix
doc, for example,
doc/doc1 and
doc/doc2.
$ aws s3 rm s3://bucket-name/doc --recursive
Use the following command to remove all objects without specifying a prefix.
$ aws s3 rm s3://bucket-name --recursive
For more information, see Using high-level S3 commands with the AWS CLI in the AWS Command Line Interface User Guide.
You can't remove objects from a bucket that has versioning enabled. Amazon S3 adds a delete marker when you delete an object, which is what this command does. For more information about S3 Bucket Versioning, see Using versioning in S3 buckets.
You can use the AWS SDKs to empty a bucket or remove a subset of objects that have a specific key name prefix.
For an example of how to empty a bucket using AWS SDK for Java, see Deleting a bucket. The code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the bucket. To just empty the bucket, make sure that you remove the statement that deletes the bucket.
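As a rough sketch only (not the referenced sample), deleting every object in a non-versioned bucket with the AWS SDK for Java v1 might look like this; the bucket name is a placeholder:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class EmptyBucket {
    public static void main(String[] args) {
        String bucketName = "bucket-name";   // placeholder
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Page through the object listing, deleting each object.
        ObjectListing listing = s3.listObjects(bucketName);
        while (true) {
            for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                s3.deleteObject(bucketName, summary.getKey());
            }
            if (listing.isTruncated()) {
                listing = s3.listNextBatchOfObjects(listing);
            } else {
                break;
            }
        }
    }
}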
For more information about using other AWS SDKs, see Tools for Amazon Web Services
If you use a lifecycle policy to empty your bucket, the lifecycle policy should include current versions, non-current versions, delete markers, and incomplete multipart uploads.
You can add lifecycle configuration rules to expire all objects or a subset of objects that have a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
Amazon S3 supports a bucket lifecycle rule that you can use to stop multipart uploads that don't complete within a specified number of days after being initiated. We recommend that you configure this lifecycle rule to minimize your storage costs. For more information, see Configuring a bucket lifecycle policy to abort incomplete multipart uploads.
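A hedged sketch of such a lifecycle configuration (the rule ID and day counts are illustrative):

{
  "Rules": [
    {
      "ID": "empty-bucket",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": { "Days": 1 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}

It could be applied with the AWS CLI, for example:

$ aws s3api put-bucket-lifecycle-configuration --bucket bucket-name --lifecycle-configuration file://lifecycle.json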
For more information about using a lifecycle configuration to empty a bucket, see Setting lifecycle configuration on a bucket and Expiring objects. | https://docs.aws.amazon.com/AmazonS3/latest/userguide/empty-bucket.html | 2022-01-16T23:54:39 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.aws.amazon.com |
Using Lumerical
Lumerical develops photonic simulation software – tools which enable product designers to understand light, and predict how it behaves within complex structures, circuits, and systems. These tools allow scientists and engineers to exploit recent advances to photonic science and material processing to develop high impact technologies across exciting fields including augmented reality, digital imaging, solar energy, and quantum computing.
3D/2D Maxwell's Solver for Nanophotonic Devices
FDTD Solutions.
Comprehensive Optical Waveguide Design Environment
Whether you are working on fiber optics or integrated photonics, Solutions has everything you need to get the most out of your waveguide and coupler designs. The Bidirectional Eigenmode expansion and varFDTD engines easily handle both large planar structures and long propagation lengths, providing accurate spatial field, modal frequency, and overlap analysis. | https://docs.hpc.udel.edu/software/lumerical/lumerical | 2022-01-16T22:31:26 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.hpc.udel.edu |
Overview of the developer portal
Developer portal is an automatically generated, fully customizable website with the documentation of your APIs. It is where API consumers can discover your APIs, learn how to use them, request access, and try them out.
As introduced in this article, you can customize and extend the developer portal for your specific scenarios..
- Open a pull request for the API Management team to merge new functionality to the managed portal's codebase.
For extensibility details and instructions, refer to the GitHub repository and the tutorial to implement a widget. The tutorial to customize the managed portal walks you through the portal's administrative panel, which is common for managed and self-hosted versions.
Next steps
Learn more about the new developer portal:
- Access and customize the managed developer portal
- Set up self-hosted version of the portal
- Implement your own widget
Browse other resources: | https://docs.microsoft.com/en-gb/azure/api-management/api-management-howto-developer-portal | 2022-01-16T22:56:18 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['media/api-management-howto-developer-portal/cover.png',
'API Management developer portal'], dtype=object) ] | docs.microsoft.com |
check_point.mgmt.cp_mgmt_security_zone_facts – Get security-zone objects facts on Check Point over Web Services API
Note
This plugin is part of the check_point.mgmt collection (version 2.2.0). To install it, use: ansible-galaxy collection install check_point.mgmt. To use it in a playbook, specify: check_point.mgmt.cp_mgmt_security_zone_facts.
New in version 2.9: of check_point.mgmt
Synopsis
Get security-zone objects facts on Check Point devices.
All operations are performed over Web Services API.
This module handles both operations, get a specific object and get several objects, For getting a specific object use the parameter ‘name’. | https://docs.ansible.com/ansible/latest/collections/check_point/mgmt/cp_mgmt_security_zone_facts_module.html | 2022-01-16T21:33:33 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.ansible.com |
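A hedged playbook sketch (the task names and the zone name are illustrative):

- name: Get facts for a specific security zone
  check_point.mgmt.cp_mgmt_security_zone_facts:
    name: SZone1

- name: Get facts for all security zones
  check_point.mgmt.cp_mgmt_security_zone_facts: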
Data Segments
Using Aporia, you can define data segments to improve your monitoring capabilities.
Suggested Data Segments
After you create a version for your model, Aporia will provide you with automatic data segment suggestions for each feature and raw_inputs defined in the schema.
NOTE: For the suggestions engine to work great, more than 100 predictions are required.
Simply click on the + sign of any of the suggested fields to create a data segment based on that suggestion.
Creating a Data Segment
When creating a new data segment group, you can choose between an automatic definition and a custom definition.
Automatic Data Segments
When creating a data segment group using the "Automatic Segment" option, multiple data segments are defined using a single, simple rule:
- For numeric fields, segments are defined by a minimum value, a maximum value, and the interval between two segments
- For vector fields, segments are defined as for numeric fields, but in terms of text length
- For categorical, boolean and string values, a segment will be defined for each unique value of the field
Custom Data Segments
When creating a data segment using the "Custom Segment" option, multiple rules can be used to define a single data segment:
NOTE: Currently there is no data segmentation for
vector field | https://docs.aporia.com/getting-started/data-segments/ | 2022-01-16T22:10:03 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['../../img_data_segments/suggestions.png',
'Data Segment Suggestion'], dtype=object)
array(['../../img_data_segments/automatic.png', 'Automatic Segment'],
dtype=object)
array(['../../img_data_segments/custom.png', 'Custom Segment'],
dtype=object) ] | docs.aporia.com |
If you want a dedicated appliance for log collection, configure an M-100 or M-500 appliance in Log Collector mode. To do this, you first perform the initial configuration of the appliance in Panorama mode, which includes licensing, installing software and content updates, and configuring the management (MGT) interface. You then switch the M-100 or M-500 appliance to Log Collector mode and complete the Log Collector configuration. Additionally, if you want to use dedicated interfaces (recommended) instead of the MGT interface for log collection and Collector Group communication, you must first configure the interfaces for the Panorama management server, then configure them for the Log Collector, and then perform a Panorama commit followed by a Collector Group commit.
Last Updated: Thu May 07 10:13:53 PDT 2020 | https://docs.paloaltonetworks.com/panorama/7-1/panorama-admin/set-up-panorama/set-up-the-m-series-appliance-as-a-log-collector.html | 2022-01-16T22:56:23 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.paloaltonetworks.com |
Discover your template css-styles and learn how to change them
There:
- You know which "container" holds your information ("table.moduletable th"), so this is the one you have to work on; you can locate the stylesheet in your directory structure and edit it with the help of Notepad or any other text editor (or advanced tools such as Dreamweaver, Zend, etc.).
- Now you want to change the size, etc.: see what is stated there (color/size/weight/width/spacing/etc.); you can change all of it to your liking (see the example after this list)!
- save your changes and done!
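For example, a hedged sketch of what such a rule might look like (the selector and values are purely illustrative):

table.moduletable th {
    color: #cc0000;        /* text color */
    font-size: 14px;       /* size */
    font-weight: bold;     /* weight */
    letter-spacing: 1px;   /* spacing */
    width: 200px;          /* width */
}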
See how helpful this extension is???
Now how to get this properly installed which is very simple but you might run into some small issues. Follow this and it will work perfectly:
- Download Firefox if you have not done yet.
- Install and choose "custom install" and check that the the development tools are tagged! (yes!).
- Install FF goes for itself.
- Start FF > Tools > Extensions > get more extensions > developers tools > page 7 (last page) > web developer 1.0.2 (version when writing this piece) and install.
- Close and re-open your browser and go to CSS; the last option is View Style Information (Ctrl+Shift+Y; see "Options" for keyboard shortcuts).
- Open Tools from the Browser and see if "Dom Inspector" is present as a choice. If yes, follow the steps above and install the extension, and it will work. If DOM Inspector is NOT present, do the following steps (needed because View Style Information does not work without the DOM function).
- Re-install Firefox with development tools! This will ensure that the Dom Inspector is installed as well. Don't be afraid... all your FF settings are preserved when re-installing... no fear, but you need to re-install, otherwise it will not work!
- Check after installation if Dom Inspector is present under the Tools tab.
- If present, go to the extension pages and follow the steps as described above. | https://docs.joomla.org/Discover_your_template_css-styles_and_learn_how_to_change_them | 2017-03-23T04:22:11 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.joomla.org |
Introduction to Lollipop
last updated: 2017-02
This article provides a high level overview of the new features introduced in Android 5.0 (Lollipop). These features include a new user interface style called Material Theme, as well as new supporting features such as animations, view shadows, and drawable tinting. Android 5.0 also includes enhanced notifications, two new UI widgets, a new job scheduler, and a handful of new APIs to improve storage, networking, connectivity, and multimedia capabilities.
Overview
Android 5.0 (Lollipop) introduces a new design language, Material Design, and with it a supporting cast of new features to make apps easier and more intuitive to use. With Material Design, Android 5.0 not only gives Android phones a facelift; it also provides a new set of design rules for Android-based tablets, desktop computers, watches, and smart TVs. These design rules emphasize simplicity and minimalism while making use of familiar tactile attributes (such as realistic surface and edge cues) to help users quickly and intuitively understand the interface.
Material Theme is the embodiment of these UI design principles in Android. This article begins by covering Material Theme's supporting features:
Animations – Touch feedback animations, activity transition animations, view state transition animations, and a reveal effect.
View shadows and elevation – Views now have an
elevation property; views with higher
elevation values cast larger shadows on the background.
Color features – Drawable tinting makes it possible for you to reuse image assets by changing their color, and prominent color extraction helps you dynamically theme your app based on colors in an image.
Many Material Theme features are already built into the Android 5.0 UI experience, while others must be explicitly added to apps. For example, some standard views (such as buttons) already include touch feedback animations, while apps must enable most view shadows.
In addition to the UI improvements brought about through Material Theme, Android 5.0 also includes several other new features that are covered in this article:
Enhanced notifications – Notifications in Android 5.0 have been significantly updated with a new look, support for lockscreen notifications, and a new Heads-up notification presentation format.
New UI widgets – The new
RecyclerView widget makes it easier for apps to convey large data sets and complex information, and the new
CardView widget provides a simplified card-like presentation format for displaying text and images.
New APIs – Android 5.0 adds new APIs for multiple network support, improved Bluetooth connectivity, easier storage management, and more flexible control of multimedia players and camera devices. A new job scheduling feature is available to run tasks asynchronously at scheduled times. This feature helps to improve battery life by, for example, scheduling tasks to take place when the device is plugged in and charging.
Requirements
The following is required to use the new Android 5.0 features in Xamarin-based apps:
Xamarin.Android – Xamarin.Android 4.20 or later must be installed and configured with either Visual Studio or Xamarin Studio. If you are using Xamarin Studio, version 5.5.4 or later is required.
Android SDK – Android 5.0 (API 21) or later must be installed via the Android SDK Manager.
Java Developer Kit – Xamarin.Android requires JDK 1.8 or later if you are developing for API level 24 or greater (JDK 1.8 also supports API levels earlier than 24, including Lollipop). The 64-bit version of JDK 1.8 is required if you are using custom controls or the Forms Previewer.
You can continue to use JDK 1.7 if you are developing specifically for API level 23 or earlier.
Setting Up an Android 5.0 Project
To create an Android 5.0 project, you must install the latest tools and SDK packages. Use the following steps to set up a Xamarin.Android project that targets Android 5.0:
Install Xamarin.Android tools and activate your Xamarin license. See Setup and Installation for more information about installing Xamarin.Android.
If you are using Xamarin Studio, install the latest Android 5.0 updates.
Start the Android SDK Manager (in Xamarin Studio, use Tools > Open Android SDK Manager…) and install Android SDK Tools 23.0.5 or later:
Also, install the latest Android 5.0 SDK packages (API 21 or later):
For more information about using the Android SDK Manager, see SDK Manager.
Create a new Xamarin.Android project. If you are new to Android development with Xamarin, see Hello, Android to learn about creating Android projects. When you create an Android project, be sure to configure the version settings for Android 5.0. In Xamarin Studio, navigate to Project Options > Build > General and set Target framework to Android 5.0 (Lollipop) or later:
Under Project Options > Build > Android Application, set minimum and target Android version to Automatic - use target framework version:
Configure an emulator or an Android device to test your app. If you are using an emulator, see Configure the Emulator to learn how to configure an Android emulator for use with Xamarin Studio or Visual Studio. If you are using an Android device, see Setting Up the Preview SDK to learn how to update your device for Android 5.0. To configure your Android device for running and debugging Xamarin.Android applications, see Set Up Device for Development.
Note: If you are updating an existing Android project that was targeting the Android L Preview, you must update the Target Framework and Android version to the values described above.
Important Changes
Previously published Android apps could be affected by changes in Android 5.0. In particular, Android 5.0 uses a new runtime and a significantly changed notification format.
Android Runtime
Android 5.0 uses the new Android Runtime (ART) as the default runtime instead of Dalvik. ART implements several major new features:
Ahead-of-time (AOT) compilation – AOT can improve app performance by compiling app code before the app is first launched. When an app is installed, ART generates a compiled app executable for the target device.
Improved garbage collection (GC) – GC improvements in ART can also improve app performance. Garbage collection now uses one GC pause instead of two, and concurrent GC operations complete in a more timely fashion.
Improved app debugging – ART provides more diagnostic detail to help in analyzing exceptions and crash reports.
Existing apps should work without change under ART—except for apps that exploit techniques unique to the previous Dalvik runtime, which may not work under ART. For more information about these changes, see Verifying App Behavior on the Android Runtime (ART).
Notification Changes
Notifications have changed significantly in Android 5.0:
Sounds and vibration are handled differently – Notification sounds and vibrations are now handled by
Notification.Builder instead of
Ringtone,
MediaPlayer, and
Vibrator.
New color scheme – In accordance with Material Theme, notifications are rendered with dark text over white or very light backgrounds. Also, alpha channels in notification icons may be modified by Android to coordinate with system color schemes.
Lockscreen notifications – Notifications can now appear on the device lockscreen.
Heads-up – High-priority notifications now appear in a small floating window (Heads-up notification) when the device is unlocked and the screen is turned on.
In most cases, porting existing app notification functionality to Android 5.0 requires the following steps:
Convert your code to use
Notification.Builder (or
NotificationsCompat.Builder) for creating notifications.
Verify that your existing notification assets are viewable in the new Material Theme color scheme.
Decide what visibility your notifications should have when they are presented on the lockscreen. If a notification is not public, what content should show up on the lockscreen?
Set the category of your notifications so they are handled correctly in the new Android 5.0 Do not disturb mode.
If your notifications present transport controls, display media playback status,
use
RemoteControlClient, or call
ActivityManager.GetRecentTasks, see
Important Behavior Changes
for more information about updating your notifications for Android 5.0.
For information about creating notifications in Android, see Local Notifications. The Compatibility section of this article explains how to create notifications that are downward-compatible with earlier versions of Android.
Material Theme
The new Android 5.0 Material Theme brings sweeping changes to the look and feel of the Android UI. Visual elements now use tactile surfaces that take on the bold graphics, typography, and bright colors of print-based design. Examples of Material Theme are depicted in the following screenshots:
Android 5.0 greets you with the home screen shown on the left. The center screenshot is the first screen of the app list, and the screenshot on the right is the Settings screen. Google’s Material Design specification explains the underlying design rules behind the new Material Theme concept.
Material Theme includes three built-in flavors that you can use in your
app: the
Theme.Material dark theme (the default), the
Theme.Material.Light theme, and the
Theme.Material.Light.DarkActionBar theme:
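One way to apply a flavor is through a custom style (a hedged sketch; the style name is arbitrary), placed in Resources/values-v21/styles.xml:

<resources>
  <style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
  </style>
</resources>

It can then be referenced from C# with an attribute such as [Activity(Theme = "@style/AppTheme")], or set on the application element in AndroidManifest.xml.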
For more about using Material Theme features in Xamarin.Android apps, see Material Theme.
Animations
Android 5.0 provides touch feedback animations, activity transition animations, and view state transition animations to make app interfaces more intuitive to use. Also, Android 5.0 apps can use reveal effect animations to hide or reveal views. You can use curved motion settings to configure how quickly or slowly animations are rendered.
Touch Feedback Animations
Touch feedback animations provide users with visual feedback when a view has been touched.
For more on touch feedback animations in Android 5.0, see Customize Touch Feedback.
Activity Transition Animations
Activity transition animations give users a sense of visual continuity when one activity transitions to another. Apps can specify three types of transition animations:
Enter transition – For when an activity enters the scene.
Exit transition – For when an activity exits the scene.
Shared element transition – For when a view that is common to two activities changes as the first activity transitions to the next.
For example, the following sequence of screenshots illustrates a shared element transition:
A shared element (a photo of a caterpillar) is one of several views in the first activity; it enlarges to become the only view in the second activity as the first activity transitions to the second.
Enter Transition Animation Types
For enter transitions, Android 5.0 provides three types of animations:
Explode animation – Enlarges a view from the center of the scene.
Slide animation – Moves a view in from one of the edges of a scene.
Fade animation – Fades a view into the scene.
Exit Transition Animation Types
For exit transitions, Android 5.0 provides three types of animations:
Explode animation – Shrinks a view to the center of the scene.
Slide animation – Moves a view out to one of the edges of a scene.
Fade animation – Fades a view out of the scene.
Shared Element Transition Animation Types
Shared element transitions support multiple types of animations, such as:
Changing the layout or clip bounds of a view.
Changing the scale and rotation of a view.
Changing the size and scale type for a view.
For more about activity transition animations in Android 5.0, see Customize Activity Transitions.
View State Transition Animations
Android 5.0 makes it possible for animations to run when the state of a view changes. You can animate view state transitions by using one of the following techniques:
Create drawables that animate state changes associated with a particular view. The new
AnimatedStateListDrawable class lets you create drawables that display animations between view state changes.
Define animation functionality that runs when the state of a view changes. The new
StateListAnimator class lets you define an animator that runs when the state of a view changes.
For more about view state transition animations in Android 5.0, see Animate View State Changes.
Reveal Effect
The reveal effect is a clipping circle that changes radius to reveal or hide a view. You can control this effect by setting the initial and final radius of the clipping circle. The following sequence of screenshots illustrates a reveal effect animation from the center of the screen:
The next sequence illustrates a reveal effect animation that takes place from the bottom left corner of the screen:
Reveal animations can be reversed; that is, the clipping circle can shrink to hide the view rather than enlarge to reveal the view.
For more information on the Android 5.0 reveal effect in, see Use the Reveal Effect.
Curved Motion
In addition to these animation features, Android 5.0 also provides new APIs that enable you to specify the time and motion curves of animations. Android 5.0 uses these curves to interpolate temporal and spatial movement during animations. Three curves are defined in Android 5.0:
Fast_out_linear_in – Accelerates quickly and continues to accelerate until the end of the animation.
Fast_out_slow_in – Accelerates quickly and slowly decelerates towards the end of the animation.
Linear_out_slow_in – Begins with a peak velocity and slowly decelerates to the end of the animation.
You can use the new
PathInterpolator class to specify how motion interpolation
takes place.
PathInterpolator is an interpolator that traverses animation paths
according to specified control points and motion curves. For more information about
how to specify curved motion settings in Android 5.0,
see Use Curved Motion.
View Shadows & Elevation
In Android 5.0, you can specify the elevation of a view by setting
a new
Z property. A greater
Z value causes the view to cast a
larger shadow on the background, making the view appear to float higher
above the background. You can set the initial elevation of a view by
configuring its
elevation attribute in the layout.
The following example illustrates the shadows cast by an empty
TextView control when its elevation attribute is set to 2dp, 4dp, and
6dp, respectively:
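A hedged layout sketch of the middle case (dimensions are illustrative):

<TextView
    android:layout_
    android:layout_
    android:background="@android:color/white"
    android:elevation="4dp" />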
View shadow settings can be static (as shown above) or they can be used
in animations to make a view appear to temporarily rise above the view’s
background. You can use the
ViewPropertyAnimator class to animate
the elevation of a view. The elevation of a view is the sum of its
layout
elevation setting plus a
translationZ property that you
can set through a
ViewPropertyAnimator method call.
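A hedged C# sketch (view is any Android.Views.View; the values are illustrative):

// Temporarily raise the view above its resting elevation...
view.Animate().TranslationZ(10f).SetDuration(100).Start();

// ...and later drop it back to its layout elevation.
view.Animate().TranslationZ(0f).SetDuration(100).Start();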
For more about view shadows in Android 5.0, see Defining Shadows and Clipping Views.
Color Features
Android 5.0 provides two new features for managing color in apps:
Drawable tinting lets you alter the colors of image assets by changing a layout attribute.
Prominent color extraction makes it possible for you to dynamically customize your app's color theme to coordinate with the color palette of a displayed image.
Drawable Tinting
Android 5.0 layouts recognize a new
tint attribute that you can use to
set the color of drawables without having to create multiple versions
of these assets to display different colors. To use this feature, you
define a bitmap as an alpha mask and use the
tint attribute to define
the color of the asset. This makes it possible for you to create assets
once and color them in your layout to match your theme.
In the following example, a single image asset—a white logo with a transparent background—is used to create tint variations:
This logo is displayed above a blue circular background as shown in
the following examples. The image on the left is how the logo appears
without a
tint setting. In the center image, the logo's
tint
attribute is set to a dark gray. In the image on the right,
tint is
set to a light gray:
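A hedged layout sketch (the drawable name and tint color are illustrative):

<ImageView
    android:layout_
    android:layout_
    android:src="@drawable/logo_white"
    android:tint="#777777" />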
For more about drawable tinting in Android 5.0, see Drawable Tinting.
Prominent Color Extraction
The new Android 5.0
Palette class lets you extract colors from an image
so that you can dynamically apply them to a custom color palette. The
Palette class extracts six colors from an image and labels these
colors according to their relative levels of color saturation and
brightness:
Vibrant
Vibrant dark
Vibrant light
Muted
Muted dark
Muted light
For example, in the following screenshots, a photo viewing app extracts the prominent colors from the image on display and uses these colors to adapt the color scheme of the app to match the image:
In the above screenshots, the action bar is set to the extracted “vibrant light” color and the background is set to the extracted “vibrant dark” color. In each example above, a row of small color squares is included to illustrate the palette colors that were extracted from the image.
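A hedged C# sketch of that approach (bitmap is the displayed image; the fallback colors are arbitrary):

using Android.Support.V7.Graphics;
// ...
Palette palette = Palette.From(bitmap).Generate();
int vibrantLight = palette.GetLightVibrantColor(unchecked((int)0xFFEEEEEE));
int vibrantDark = palette.GetDarkVibrantColor(unchecked((int)0xFF333333));
// Apply vibrantLight to the action bar and vibrantDark to the background.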
For more about color extraction in Android 5.0, see Extracting Prominent Colors from an Image.
New UI Widgets
Android 5.0 introduces two new UI widgets:
RecyclerView– A view group that displays a list of scrollable items.
CardView– A basic layout with rounded corners.
Both widgets include baked-in support for Material Theme features; for
example,
RecyclerView uses animations for adding and removing views,
and
CardView uses view shadows to make each card appear to float above
the background. Examples of these new widgets are shown in the following
screenshots:
The screenshot on the left is an example of
RecyclerView as used in an
email app, and the screenshot on the right is an example of
CardView
as used in a travel reservation app.
RecyclerView
RecyclerView is similar to
ListView, but it is better suited for
large sets of views or lists with elements that change dynamically. Like
ListView, you specify an adapter to access the underlying data
set. However, unlike
ListView, you use a layout manager to position
items within
RecyclerView. The layout manager also takes care of view
recycling; it manages the reuse of item views that are no longer visible
to the user.
When you use a
RecyclerView widget, you must specify a
LayoutManager
and an adapter. As shown in this figure,
LayoutManager is the
intermediary between the adapter and the
RecyclerView:
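A hedged C# sketch of the wiring (PhotoAdapter, photoAlbum, and the resource ID are placeholders for your own adapter, data set, and layout):

RecyclerView recyclerView = FindViewById<RecyclerView>(Resource.Id.recyclerView);
recyclerView.SetLayoutManager(new LinearLayoutManager(this));   // positions and recycles items
recyclerView.SetAdapter(new PhotoAdapter(photoAlbum));          // supplies item views from the data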
The following screenshots illustrate a
RecyclerView that contains 100
items (each item consists of an
ImageView and a
TextView):
RecyclerView handles this large data set with ease—scrolling
from the beginning of the list to end of the list in this sample app
takes only a few seconds.
RecyclerView also supports animations;
in fact, animations for adding and removing items are enabled by
default. When an item is added to a
RecyclerView, it fades in as shown
in this sequence of screenshots:
For more about
RecyclerView,
see RecyclerView.
CardView
CardView is a simple view that simulates a floating card with rounded
corners. Because
CardView has built-in view shadows, it provides
an easy way for you to add visual depth to your app. The following
screenshots show three text-oriented examples of
CardView:
Each of the cards in the above example contains a
TextView; the
background color is set via the
cardBackgroundColor attribute.
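A hedged layout sketch (colors and sizes are illustrative):

<android.support.v7.widget.CardView
    xmlns:android=""
    xmlns:card_
    android:layout_
    android:layout_
    android:layout_
    card_view:cardBackgroundColor="#FFF19588"
    card_view:cardCornerRadius="4dp">

    <TextView
        android:layout_
        android:layout_
        android:padding="16dp"
        android:text="Hello, CardView" />

</android.support.v7.widget.CardView>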
For more about
CardView,
see CardView.
Enhanced Notifications
The notification system in Android 5.0 has been significantly updated with a new visual format and new features. Notifications have a new look in Android 5.0. For example, notifications in Android 5.0 now use dark text over a light background:
When a large icon is displayed in a notification (as shown in the above example), Android 5.0 presents the small icon as a badge over the large icon.
In Android 5.0, notifications can also appear on the device lockscreen. For example, here is an example screenshot of a lockscreen with a single notification:
Users can double-tap a notification on the lockscreen to unlock the device and jump to the app that originated that notification, or swipe to dismiss the notification. Notifications have a new visibility setting that determines how much content can be displayed on the lockscreen. Users can choose whether to allow sensitive content to be shown in lockscreen notifications.
Android 5.0 introduces a new high-priority notification presentation format called Heads-up. Heads-up notifications slide down from the top of the screen for a few seconds and then retreat back to the notification shade at the top of the screen. Heads-up notifications make it possible for the system UI to put important information in front of the user without disrupting the currently running activity. The following example illustrates a simple Heads-up notification that displays on top of an app:
Heads-up notifications are typically used for the following events:
A new text message
An incoming phone call
Low battery indication
An alarm
Android 5.0 displays a notification in Heads-up format only when it has a high or max priority setting.
In Android 5.0, you can provide notification metadata to help Android sort and display notifications more intelligently. Android 5.0 organizes notifications according to priority, visibility, and category. Notification categories are used to filter which notifications can be presented when the device is in Do not disturb mode.
For detailed information about creating and launching notifications with the latest Android 5.0 features, see Local Notifications.
New APIs
In addition to the new look-and-feel features described above, Android 5.0 adds new APIs that extend the capabilities of existing multimedia, storage, and wireless/connectivity functionality. Also, Android 5.0 includes new APIs that provide support for a new job scheduler feature.
Camera
Android 5.0 provides several new APIs for enhanced camera
capabilities. The new
Android.Hardware.Camera2 namespace includes
functionality for accessing individual camera devices connected to an
Android device. Also,
Android.Hardware.Camera2 models each camera
device as a pipeline: it accepts a capture request, captures the image,
and then outputs the result. This approach makes it possible for apps to
queue multiple capture requests to a camera device.
The following APIs make these new features possible:
CameraManager.GetCameraIdList – Helps you to programmatically access camera devices; you use CameraManager.OpenCamera to connect to a specific camera device.
CameraCaptureSession – Captures or streams images from the camera device. You implement a CameraCaptureSession.CaptureListener interface to handle new image capture events.
CaptureRequest – Defines capture parameters.
CaptureResult – Provides the results of an image capture operation.
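A hedged C# sketch of enumerating camera devices with these APIs:

CameraManager manager = (CameraManager)GetSystemService(Context.CameraService);
string[] cameraIds = manager.GetCameraIdList();
foreach (string id in cameraIds)
{
    CameraCharacteristics characteristics = manager.GetCameraCharacteristics(id);
    // Inspect the characteristics, then call manager.OpenCamera(...) to connect
    // to the chosen device and build a capture session from it.
}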
For more about the new camera APIs in Android 5.0, see Media.
Audio Playback
Android 5.0 updates the
AudioTrack class for better audio playback:
ENCODING_PCM_FLOAT – Configures AudioTrack to accept audio data in floating-point format for better dynamic range, greater headroom, and higher quality (thanks to increased precision). Also, floating-point format helps to avoid audio clipping.
ByteBuffer – You can now supply audio data to AudioTrack as a byte array.
WRITE_NON_BLOCKING – This option simplifies buffering and multithreading for some apps.
For more about
AudioTrack improvements in Android 5.0,
see Media.
Media Playback Control
Android 5.0 introduces the new
Android.Media.MediaController class,
which replaces
RemoteControlClient.
Android.Media.MediaController
provides simplified transport control APIs and offers thread-safe
control of playback outside of the UI context. The following new APIs
handle transport control:
Android.Media.Session.MediaSession – A media control session that handles multiple controllers. You call MediaSession.GetSessionToken to request a token that your app uses to interact with the session.
MediaController.TransportControls – Handles transport commands such as Play, Stop, and Skip.
Also, you can use the new
Android.App.Notification.MediaStyle class
to associate a media session with rich notification content (such as
extracting and showing album art).
For more about the new media playback control features in Android 5.0, see Media.
Storage
Android 5.0 updates the Storage Access Framework to make it easier for applications to work with directories and documents:
To select a directory subtree, you can build and send an
Android.Intent.Action.OPEN_DOCUMENT_TREE intent. This intent causes the system to display all provider instances that support subtree selection; the user then browses and selects a directory.
To create and manage new documents or directories anywhere under a subtree, you use the new
CreateDocument,
RenameDocument, and
DeleteDocument methods of
DocumentsContract.
To get paths to media directories on all shared storage devices, you call the new
Android.Content.Context.GetExternalMediaDirs method.
For more about new storage APIs in Android 5.0, see Storage.
Wireless & Connectivity
Android 5.0 adds the following API enhancements for wireless and connectivity:
New multi-network APIs that make it possible for apps to find and select networks with specific capabilities before making a connection.
Bluetooth broadcasting functionality that enables an Android 5.0 device to act as a low-energy Bluetooth peripheral.
NFC enhancements that make it easier to use near-field communications functionality for sharing data with other devices.
For more about the new wireless and connectivity APIs in Android 5.0, see Wireless and Connectivity.
Job Scheduling
Android 5.0 introduces a new
JobScheduler API that can help users
minimize battery drain by scheduling certain tasks to run only when the
device is plugged in and charging. This job scheduler feature can also
be used for scheduling a task to run when conditions are more suitable
to that task, such as downloading a large file when the device is
connected over a Wi-Fi network instead of a metered network.
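A hedged C# sketch (DownloadJobService is a placeholder JobService subclass registered in the manifest):

var component = new ComponentName(this, Java.Lang.Class.FromType(typeof(DownloadJobService)));

JobInfo job = new JobInfo.Builder(1, component)
    .SetRequiresCharging(true)   // only run while plugged in and charging
    .Build();

var scheduler = (JobScheduler)GetSystemService(Context.JobSchedulerService);
scheduler.Schedule(job);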
For more about the new job scheduling APIs in Android 5.0, see Scheduling Jobs.
Summary
This article provided an overview of important new features in Android 5.0 for Xamarin.Android app developers:
Material Theme
Animations
View shadows and elevation
Color features, such as drawable tinting and prominent color extraction
The new
RecyclerView and
CardView widgets
Notification enhancements
New APIs for camera, audio playback, media control, storage, wireless/connectivity, and job scheduling
If you are new to Xamarin Android development, read Setup and Installation to help you get started with Xamarin.Android. Hello, Android is an excellent introduction for learning how to create Android. | https://docs.mono-android.net/guides/android/platform_features/introduction_to_lollipop/ | 2017-03-23T04:18:36 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['Images/touch-animation.png', None], dtype=object)
array(['Images/xamarin-logo-white.png', None], dtype=object)
array(['Images/drawable-tinting.png', None], dtype=object)
array(['Images/recyclerview-diagram.png', None], dtype=object)
array(['Images/expanded-notification-contracted.png', None], dtype=object)] | docs.mono-android.net |
Vertical Percent Stacked Area Chart
Overview
A Vertical Percent Stacked Area Chart is a multi-series Area Chart that displays the trend of the percentage each value contributes over time or categories. The categories of this chart are spread among the vertical axis.
The concept of stacking in AnyChart is explained in this article: Stacked (Overview).
Quick Start
To build a Vertical Percent Stacked Area Chart, you should create a multi-series Vertical Area Chart and set stackMode() to percent:
// create a chart
var chart = anychart.verticalArea();
// enable the percent stacking mode
chart.yScale().stackMode("percent");
// create area series
var series1 = chart.area(seriesData_1);
var series2 = chart.area(seriesData_2);
Adjusting
The Vertical Percent Stacked Area series' settings are mostly the same as other series'. The majority of information about adjusting series in AnyChart is given in the General Settings article. | https://docs.anychart.com/Basic_Charts/Stacked/Percent/Vertical_Area_Chart | 2017-03-23T04:17:06 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.anychart.com |
public class BarnesSurfaceInterpolator extends Object
Barnes Surface Interpolation is a surface estimating method commonly used as an interpolation technique for meteorological datasets. The algorithm operates on a regular grid of cells covering a specified extent in the input data space. It computes an initial pass to produce an averaged (smoothed) value for each cell in the grid, based on the cell's proximity to the points in the input observations. Subsequent refinement passes may be performed to improve the surface estimate to better approximate the observed values.
For the first pass, the estimated value at each grid cell is:
Eg = sum(wi * oi) / sum(wi)

where:

Eg is the estimated surface value at the grid cell
wi is the weight value for the i'th observation point (see below for definition)
oi is the value of the i'th observation point

The weight (decay) function used is:

wi = exp(-di^2 / (L^2 * c))

where:

wi is the weight of the i'th observation point value
di is the distance from the grid cell being estimated to the i'th observation point
L is the length scale, which is determined by the observation spacing and the natural scale of the phenomena being measured. The length scale is in the units of the coordinate system of the data points. It will likely need to be empirically estimated.
c is the convergence factor, which controls how much refinement takes place during each refinement step. In the first pass the convergence is automatically set to 1. For subsequent passes a value in the range 0.2 - 0.3 is usually effective.

During refinement passes the estimate at each grid cell is refined by:

Eg' = Eg + sum( wi * (oi - Ei) ) / sum( wi )

To optimize performance for large input datasets, it is only necessary to provide the data points which affect the surface interpolation within the specified output extent. In order to avoid "edge effects", the provided data points should be taken from an area somewhat larger than the output extent. The extent of the data area depends on the length scale, convergence factor, and data spacing in a complex way. A reasonable heuristic for determining the size of the query extent is to expand the output extent by a value of 2L.
Since the visual quality and accuracy of the computed surface is lower further from valid observations, the algorithm allows limiting the extent of the computed cells. This is done by using the concept of supported grid cells. Grid cells are supported by the input observations if they are within a specified distance of a specified number of observation points. Grid cells which are not supported are not computed and are output as NO_DATA values.
public static final float DEFAULT_NO_DATA_VALUE
public BarnesSurfaceInterpolator(Coordinate[] observationData)
Creates a Barnes interpolator over a specified dataset of observation values. The observation data is provided as an array of Coordinate values, where the X,Y ordinates are the observation location, and the Z ordinate contains the observation value.
observationData - the observed data values
public void setPassCount(int passCount)
passCount - the number of estimation passes to perform (1 or more)
public void setLengthScale(double lengthScale)
lengthScale -
public void setConvergenceFactor(double convergenceFactor)
convergenceFactor - the factor determining how much to refine the surface estimate
public void setMaxObservationDistance(double maxObsDistance)
maxObsDistance - the maximum distance from an observation for a supported grid point
public void setMinObservationCount(int minObsCount)
minObsCount - the minimum in-range observation count for supported grid points
public void setNoData(float noDataValue)
noDataValue - the value to use to represent NO_DATA.
public float[][] computeSurface(Envelope srcEnv, int xSize, int ySize)
Computes the estimated surface over a regular grid of cells covering the given Envelope. The size of the grid is specified by the cell count for the grid width (X) and height (Y).
srcEnv - the area covered by the grid
xSize - the width of the grid
ySize - the height of the grid | http://docs.geotools.org/latest/javadocs/org/geotools/process/vector/BarnesSurfaceInterpolator.html | 2017-03-23T04:25:04 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.geotools.org |
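A hedged usage sketch (observation data and parameter values are illustrative; the JTS package name depends on the GeoTools/JTS version in use):

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Envelope;
import org.geotools.process.vector.BarnesSurfaceInterpolator;

public class BarnesExample {
    public static void main(String[] args) {
        // X, Y are observation locations; Z carries the observed value.
        Coordinate[] observations = {
            new Coordinate(0, 0, 10.0),
            new Coordinate(5, 5, 20.0),
            new Coordinate(10, 0, 15.0)
        };

        BarnesSurfaceInterpolator interpolator = new BarnesSurfaceInterpolator(observations);
        interpolator.setLengthScale(5.0);             // in data coordinate units
        interpolator.setConvergenceFactor(0.3);
        interpolator.setPassCount(2);
        interpolator.setMaxObservationDistance(10.0);
        interpolator.setMinObservationCount(1);

        Envelope extent = new Envelope(0, 10, 0, 10);
        float[][] surface = interpolator.computeSurface(extent, 100, 100);
        System.out.println("cell(0,0) = " + surface[0][0]);
    }
}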
Hiera: What is Hiera?
Included in Puppet Enterprise 2017.1. Hiera is Puppet's built-in configuration data lookup system; it helps you avoid repetition.
You should use Hiera with the roles and profiles method.
Am I going to have to change all my data and config files?
I use a custom Hiera 3 backend. Can I use Hiera 5?
Some features are deprecated. When are they getting removed?
Probably in Puppet 6. You have some time. | https://docs.puppet.com/puppet/4.9/hiera_intro.html | 2017-03-23T04:22:53 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.puppet.com |
Suite Cross References
Sometimes you may want to pull individual test pages from elsewhere in the hierarchy into a suite; you can do this with the !see command in a test suite. Any test pages referenced by !see on a suite page will be executed as part of that suite.
The suite page might look like this:
!see .MyProject.MyFeature.IterationOne.StoryOne.TestCaseOne !see .MyProject.MyFeature.IterationTwo.StoryTwelve.TestCaseThirteen ... | http://docs.fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.WritingAcceptanceTests.TestSuites.CrossReferenceSuites | 2017-09-19T18:44:44 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.fitnesse.org |
29. Reliability and the Write-Ahead Log
This chapter explains how the Write-Ahead Log is used to obtain efficient, reliable operation.
29.1. Reliability
Reliability is an important property of any serious database system, and PostgreSQL™ does everything possible to guarantee reliable operation. One hazard is that many disk drives and disk controllers have volatile write-back caches that can lose committed data on power failure unless they are protected by a battery-backed unit (BBU), though ATAPI-6 introduced a drive cache flush command (FLUSH CACHE EXT) that some file systems use, e.g. ZFS, ext4. (The SCSI command SYNCHRONIZE CACHE has long been available.) Many solid-state drives (SSD) also have volatile write-back caches, and many do not honor cache flush commands by default.
To check write caching on Linux™ use hdparm -I; it is enabled if there is a * next to Write cache; hdparm -W to turn off write caching. On FreeBSD™ use atacontrol. (For SCSI disks use sdparm to turn off WCE.) On Solaris™ the disk write cache is controlled by format -e. (The Solaris ZFS file system is safe with disk write-cache enabled because it issues its own disk cache flush commands.) On Windows™ if wal_sync_method is open_datasync (the default), write caching is disabled by unchecking My Computer\Open\{select disk drive}\Properties\Hardware\Properties\Policies\Enable write caching on the disk. Also on Windows, fsync and fsync_writethrough never do write caching.
Many file systems that use write barriers (e.g. ZFS, ext4) internally use FLUSH CACHE EXT or SYNCHRONIZE CACHE commands to flush data to the platters on write-back-enabled drives. Unfortunately, such write barrier file systems behave suboptimally when combined with battery-backed unit (BBU) disk controllers. In such setups, the synchronize command forces all data from the BBU to the disks, eliminating much of the benefit of the BBU. You can run the utility src/tools/fsync in the PostgreSQL source tree to see if you are affected. If you are affected, the performance benefits of the BBU cache can be regained by turning off write barriers in the file system or reconfiguring the disk controller, if that is an option. If write barriers are turned off, make sure the battery remains active; a failing battery can potentially cause data loss. In addition, PostgreSQL™ periodically writes full page images to permanent WAL storage before modifying the actual page on disk. By doing this, during crash recovery PostgreSQL™ can restore partially-written pages. If you have a battery-backed disk controller or file-system software that prevents partial page writes (e.g., ZFS), you can turn off this page imaging by turning off the full_page_writes parameter. | http://docs.itpug.org/wal.html | 2017-09-19T18:53:21 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.itpug.org |
Advanced Usage¶
Platform-specific components¶
New in version 1.4.
Platform-specific components allow to customize behavior depending on the system or “platform” the target system runs as. Examples:
- Production system on Gentoo, local development on Ubuntu, or
- All VMs on Ubuntu but Oracle is being run with RedHat.
To define a platform specific aspects, you use the platform class decorator. Example:
import batou.component import batou.lib.file class Test(batou.component.Component): def configure(self): self += batou.lib.file.File('base-component') @batou.component.platform('nixos', Test) class TestNixos(batou.component.Component): def configure(self): self += batou.lib.file.File('i-am-nixos') @batou.component.platform('ubuntu', Test) class TestUbuntu(batou.component.Component): def configure(self): self += batou.lib.file.File('i-am-ubuntu')
The platform is then defined in the environment:
[environment] platform = default-platform [host:nixos] # Host specifc override: platform = nixos components = test [host:ubuntu] # Host specifc override: platform = ubuntu components = test
Host-specific data¶
New in version 1.5.
Host-specifc data allows to set environment depentend data for a certain host. It looks like this in an environment configuration:
[host:myhost00] components = test data-alias = nice-alias.for.my.host.example.com
In a component you can access all data attributes via the host’s data dictionary:
def configure(self): alias = self.host.data['alias']
The
data- prefix was chosen in resemblance of the HTML standard.
DNS overrides¶
New in version 1.6
When migrating services automatic DNS lookup of IP addresses to listen on can be cumbersome. You want to deploy the service before the DNS changes become active. This is where DNS overrides can help.
The DNS overrides short circuit the resolving completely for the given host names.
Example:
[environment]
...

[resolver]
www.example.com =
    3.2.1.4
    ::2
Whenever batou configuration (i.e.
batou.utils.Address) looks up it will result in the addresses
3.2.1.4 and
::2.
The overrides support IPv4 and IPv6. You should only set one IP address per type for each host name.
Note
You cannot override the addresses of the configured hosts. The SSH connection will always use genuine name resolving. | http://batou.readthedocs.io/en/latest/user/advanced.html | 2017-09-19T18:38:41 | CC-MAIN-2017-39 | 1505818685993.12 | [] | batou.readthedocs.io |
The whole part of persistent connections is outdated. In version 1.3.0:
«Removed the "persist" option, as all connections are now persistent. It can still be used, but it doesn't affect anything.»
As a matter of fact, this note is also wrong. Using option "persist" will throw an
Uncaught exception 'MongoConnectionException' with message '- Found unknown connection string option 'persist' with value 'x' | http://docs.php.net/manual/fr/mongo.connecting.php | 2015-05-22T09:59:30 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.php.net |
Table of Contents
Product Index
Fantasy IBL Sandy Beach HDRI features eight (8) high resolution environments featuring a pristine sandy beach lapped by the gentle waves of a calm ocean on a clear, sunny day that is perfect for fantasy and fashion renders of all types. Sandy Beach features a camera located on the shore just beyond the tide line and places the sun opposite the ocean to make the lighting perfect for walks on the beach with the ocean in the background. What's more, there isn't a single footprint or tire track anywhere! We provide four (4) distinct camera heights at both mid-morning/mid-afternoon and sunset/sunrise. We include camera presets to perfectly match the camera heights in the environments, which lets you choose the most dramatic and pleasing angle for any of your renders. We have included eight (8) high definition 128 Megapixel HDRI environments (16,384 x 8,192 @ 32-bit) for the absolute most realistic looking environment you can have, but we also include medium resolution (4096 x 2048) and low resolution (1024 x 512) HDRIs, as well. That way you can still use these with low end machines or to increase responsiveness on more capable systems. Best of all, when you are ready to make your masterpiece, you can switch to the best quality environment with a single click before rendering.
Fantasy IBL Sandy Beach HDRI is one of two beaches featured in our Fantasy IBL Beach Bundle Volume 1. The bundle includes both Fantasy IBL Sandy Beach HDRI and Fantasy IBL Rock Arch Beach HDRI. Buy the bundle and get more rendering options for less. | http://docs.daz3d.com/doku.php/public/read_me/index/49879/start | 2021-10-16T00:06:15 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.daz3d.com |
Table of Contents
Product Index
Z Modern Gadgets Props and Poses is the ultimate set to help you stay connected! Call your friend, video call your loved ones, listen to your favorite music, or experience the wonder of virtual reality now!
The Modern Gadgets props and their textures have been created to a high level of detail, so your renders can look fantastic close up as well as far away. Not only is Modern Gadgets easy to use, but it also delivers on versatility.
The Props include a GamePad, Laptop, Phone, Tablet, TV, TV Stand, WebCam, VR Controllers, VR Headsets, SmartWatch, and Wireless Headphones.
The set also includes 40 poses, with the relevant rigged parts posed.
Get Z Modern Gadgets now to stay connected! | http://docs.daz3d.com/doku.php/public/read_me/index/69965/start | 2021-10-15T23:29:35 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.daz3d.com |
Description
Adds DNS views.

Parameters
configurationId (query, integer int64) - The object ID of the parent configuration in which this DNS view is located. Note: Starting in Address Manager API v9.2.0, this parameter is now required.
name (query, string) - The name of the view.
properties (query, string) - Adds object properties, including user-defined fields.

Responses
201 (number) - Returns the object ID for the new DNS view. | https://docs.bluecatnetworks.com/r/Address-Manager-API-Guide/POST/v1/addView/9.2.0 | 2021-10-16T00:44:04 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.bluecatnetworks.com |
Accessing CHT Apps
Apps built with the Core Framework runs on most modern computers with the newest versions of Google Chrome or Mozilla Firefox.
Hardware & Software Requirements
Hardware procurement, ownership, and management is the responsibility of each implementing organization. We strongly urge all organizations to procure hardware locally to ensure ease of replacement, repair, sustainability, and hardware support when needed.
Accessing on Desktop
On desktop devices, there is no need to download anything. Simply go to a web browser and type in your unique URL, for example:
{{projectname}}.app.medicmobile.org
Accessing on Mobile
The app also runs with an app on Android phones and tablets. It works best on devices running version 5.1 or later with at least 8 GB of internal memory (16 GB for supervisors) and minimum 1 GB RAM.
Downloading and Launching
To download your app on a mobile device, first navigate to the Google Play Store. From there, click on the search icon in the upper right, and type in the custom name of your health app or project. Make sure the app shown is the correct one and then select it. Then, click on the “Install” button to begin the download.
Once the download is complete, you can access your app via an app icon in your applications menu. Note that the icon, as well as the app name displayed, is customizable by the organization or project.
When accessing your app for the very first time, a login page is displayed. Users enter a username and password that grant access to their customized app experience.
On mobile devices, the app generally stays logged in after initial setup so that CHW users don’t have to type in their credentials each day.
On desktop devices, the user must login again if they close the app tab or browser window.
Users may log out by going to the options menu available in the top right corner of the app.
See Also: Navigating CHT Apps
Magic Links for Logging In
When creating users, the admin has the option to send a user their credentials via SMS using a link. Clicking the link generates a new, random and complex password with a 24-hour expiry. If no gateway is set up, the message may be sent via another messaging app.
By clicking the magic link to log in, the user is able to enter their project’s instance directly, bypassing the need to enter their username and password. If the app is not installed on their phone, it will open in their default browser.
To recover a password, the user needs to contact the admin so that they may regenerate a new magic link and repeat the workflow.
See Also: Remote Onboarding and Training
NoteThe magic link workflow will not work for users who want to use multiple devices or for multiple users on one device.
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve. | https://docs.communityhealthtoolkit.org/apps/concepts/access/ | 2021-10-15T23:53:46 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['playstore.png', None], dtype=object)
array(['search-results.png', None], dtype=object)
array(['install.png', None], dtype=object)
array(['siaya.png', None], dtype=object)
array(['login-mobile.png', None], dtype=object)
array(['login-desktop.png', None], dtype=object)
array(['admin.png', None], dtype=object)
array(['link.png', None], dtype=object)
array(['open-with.png', None], dtype=object)
array(['log-in.png', None], dtype=object)] | docs.communityhealthtoolkit.org |
Location
Documentation Home
Palo Alto Networks
Support
Live Community
Knowledge Base
PAN-OS
PAN-OS Web Interface Help
Policies
Policies > Security
Building Blocks in a Security Policy Rule
Document:
PAN-OS Web Interface Help
Building Blocks in a Security Policy Rule
Download PDF
Last Updated:
Thu Oct 07 11:23:39 PDT 2021
Current Version:
10.1
Version 10.1
Version 10.0
Version 9.1
Version 9.0
Version 8.1
Version 8.0 (EoL) of the rules will change as rules are moved. When you filter rules to match specific filters, 1,024.
Source Zone
Source
Add
source.
Source Address
Source
Source Device
Source
Add
the host devices subject to the policy:
any
—Includes any device.
no-hip
—HIP information is not required. This setting enables access from third-party devices that cannot collect or submit HIP information.
quarantine
—Includes any device that is in the quarantine list (
Device
Device Quarantine
).
—Includes selected devices as determined by your configuration. For example, you can add a device object based on model, OS, OS family, or vendor.
Source HIP Profile
Source.
Source Subscriber
Source
Add
one or more source subscribers in a 5G or 4G network using the following formats:
Any
(
5G only
) 5G Subscription Permanent Identifier (SUPI) including IMSI
IMSI (14 or 15 digits)
Range of IMSI values from 11 to 15 digits, separated by a hyphen
IMSI prefix of six digits, with an asterisk (*) as a wildcard after the prefix
EDL that specifies IMSIs
Source Equipment
Add
one or more source equipment IDs in a 5G or 4G network using the following formats:
Any
(
5G only
) 5G Permanent Equipment Identifier (PEI) including International Mobile Equipment Identity (IMEI)
IMEI (11 to 16 digits long)
IMEI prefix of eight digits for Type Allocation Code (TAC)
EDL that specifies IMEIs
Network Slice
Source
Add
one or more source network slices based on network slice service type (SST) in a 5G network, as follows:
Standardized (predefined) SST
eMBB
(enhanced Mobile Broadband)—For faster speeds and high data rates, such as video streaming.
URLLC
(Ultra-Reliable Low-Latency Communications)—For mission-critical applications that are sensitive to latency, such as critical IoT (healthcare, wireless payments, home control, and vehicle communication).
MIoT
(Massive Internet of Things)—For example, smart metering, smart waste management, anti-theft, asset management, and location tracking.
Network Slice SST - Operator-Specific
—You name and specify the slice. The format of the slice name is text followed by a comma (,) and a number (range is 128 to 255). For example, Enterprise Oil2,145..
Destination Device
Add
the host devices subject to the policy:
any
—Includes any device.
quarantine
—Includes any device that is in the quarantine list (
Device
Device Quarantine
).
—Includes selected devices as determined by your configuration. For example, you can add a device object based on model, OS, OS family, or vendor. client and server
, Vulnerability Protection, Anti-Spyware, URL Filtering, File Blocking, Data Filtering, WildFire Analysis,
Mobile Network Protection
, and
SCTP Protection
profiles.
To specify a profile group rather than individual profiles, select the
Profile Type
to be
Group
and then select a
Group Profile
.
To define new profiles or profile groups, click
New
next to the appropriate profile or select
New Group Profile
..
Any (target all devices)
Panorama only
Target
Enable (check) to push the policy rule to all managed firewalls in the device group.
Devices
Panorama only
Select one or more managed firewalls associated with the device group to push the policy rule to.
Panorama only
Add
one or more tags to push the policy rule to managed firewalls in the device group with the specified tag.
Target to all but these specified devices and tags
Panorama only
Enable (check) to push the policy rule to all managed firewalls associated with the device group except for the selected device(s) and tag(s).
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/10-1/pan-os-web-interface-help/policies/policies-security/building-blocks-in-a-security-policy-rule.html | 2021-10-15T23:40:22 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.paloaltonetworks.com |
Importing AppSpider scan data
If you use Rapid7 AppSpider to scan your Web applications, you can import AppSpider data with Nexpose scan data and reports. This allows you to view security information about your Web assets side-by-side with your other network assets for more comprehensive assessment and prioritization.
The process involves importing an AppSpider-generated file of scan results, VulnerabilitiesSummary.xml, into a Nexpose site. Afterward, you view and report on that data as you would with data from a Nexpose scan.
If you import the XML file on a recurring basis, you will build a cumulative scan history in Nexpose about the referenced assets. This allows you to track trends related to those assets as you would with any assets scanned in Nexpose.
This import process works with AppSpider versions 6.4.122 or later.
To import AppSpider data, take the following steps:
- Create a site if you want a dedicated site to include AppSpider data exclusively. See Creating and editing sites. Since you are creating the site to contain AppSpider scan results, you do not need to set up scan credentials. You will need to include at least one asset, which is a requirement for creating a site. However, it will not be necessary to scan this asset. If you want to include AppSpider results in an existing site with assets scanned by Nexpose, skip this step.
- Download the VulnerabilitiesSummary.xml file, generated by AppSpider, to the computer that you are using to access the Nexpose Web interface.
- In the Sites table, select the name of the site that you want to use for AppSpider.
- In the Site Summary table for that site, click the hypertext link labeled Import AppSpider Assessment.
- Click the button that appears, labeled Choose File. Find the VulnerabilitiesSummary.xml on your local computer and click Open in Windows Explorer. The file name appears, followed by an Import button.
- Click Import.
The imported data appears in the Assets table on your site page. You can work with imported assets as you would with any scanned by Nexpose: View detailed information about them, tag them, and include them in asset groups, and reports.
Although you can include imported assets in dynamic assets groups, the data about these imported assets is not subject to change with Nexpose scans. Data about imported assets only changes with subsequent imports of AppSpider data. | https://docs.rapid7.com/nexpose/importing-appspider-scan-data/ | 2021-10-15T22:52:05 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['/api/docs/file/product-documentation__master/nexpose/images/s_nx_appspider_click_site.png',
None], dtype=object)
array(['/api/docs/file/product-documentation__master/nexpose/images/s_nx_appspider_import.png',
None], dtype=object)
array(['/api/docs/file/product-documentation__master/nexpose/images/s_nx_appspider_imported_asset.png',
None], dtype=object) ] | docs.rapid7.com |
This article covers the various Azure SQL Database performance metrics displayed by the Performance Analysis Dashboard and Performance Analysis Overview, and how to interpret different metric values and combinations of values across different Azure SQL DB metrics for SQL Sentry.
Note: For Mode: S = Sample and H = History. | https://docs.sentryone.com/help/azure-sql-database-performance-metrics | 2021-10-16T00:22:47 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.sentryone.com |
Relocating the Database or Repository¶
Overview¶
There may come a time where you have to move the Database or Repository (or both) to another location or another machine. This guide will walk you through the steps required.
Migrating the Database¶
These are the steps to move your Database to a new location:
- Shut down all the Slave applications running on your render nodes. You don’t want them making changes during the move.
- Stop the mongod process on the Database machine.
- Copy the Database folder from the original location to the new one.
- Update the config.conf file in the data folder to point to the new system log folder and storage folder locations.
- Start the mongod process on the Database machine.
- Modify the dbConnect.XML file in the settings folder in the Repository to set the new database host name or IP address (if you moved it to another machine).
- Start up the Slaves and ensure that they can connect to the new Database.
Here is an example of how you would update the config.conf file if you copied the new database location was C:\NEW_DATABASE_FOLDER:
systemLog: destination: file path: C:/NEW_DATABASE_FOLDER/data/logs/log.txt quiet: true storage: dbPath: C:/NEW_DATABASE_FOLDER/data
Because the Clients use the dbConnect.xml file in the Repository to determine the database connection settings, you don’t have to reconfigure the Clients to find the new database.
Migrating the Repository¶
These are the steps to move your Repository to a new location:
- Ensure that the share for the new location already exists. Also ensure that the proper permissions have been set.
- Shut down all the Slave applications running on your render nodes. You don’t want them making changes during the move.
- Copy the Repository folder from the original location to the new location.
- Redirect all your Client machines to point to the new Repository location.
- Start up the Slaves and ensure that they can connect to the new Repository location.
- Delete the original Repository (optional).
As an alternative to step (4), you can configure your share name (if the new Repository is on the same machine) or your DNS settings (if the new Repository is on a different machine) so that the new Repository location has the same path as the original. This saves you the hassle of having to reconfigure all of your Client machines. | https://docs.thinkboxsoftware.com/products/deadline/7.1/1_User%20Manual/manual/relocating.html | 2021-10-15T23:35:07 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.thinkboxsoftware.com |
About¶
ahlive is an open-source Python package that makes animating data simple, clean, and enjoyable!
motivation¶
ahlive was developed because the author enjoyed seeing animated plots of data and wanted to make his own ones. There were already several Python packages that had animation capabilities, namely matplotlib, holoviews, easing, bar-chart-race, animaplot, etc.
These packages, unfortunately, didn’t include the author’s desired features, or the features existed but was too verbose/complex to use. In addition, these features either didn’t fit the packages’ development roadmap, required a major internal refactor, or some of the packages were not actively maintained. Thus, ahlive was born.
features¶
Some of ahlive’s notable aspects include:
variety of charts with presets, e.g. bar chart
raceand gridded
scan_x
built-in dynamic annotations, e.g.
state_labelsand
inline_labels
various static annotations, e.g.
titleand
caption
easily accessible keywords, e.g.
figsizeand
xmargins
vectorized and eased (smooth) interpolation, e.g.
interpand
ease
moving x and y axes limits, e.g.
xlimsand
ylims
operators for joining plots, e.g.
*and
+
agile, sensible defaults, e.g. if under 5 frames, use
scatterelse
line
extensible customizability, e.g.
hooks
parallelized output, e.g.
num_workers
requirements¶
ahlive requires the following packages to work:
For specific package versions see requirements.txt.
name origin¶
This package was named “ahlive” as a result of the author’s enthusiasm for puns, and it took the author many long walks to satisfactorily derive the package’s name.
The package name has various meanings:
“ahlive” is a mispelling for “alive” and alive can mean not lifeless, not inanimate, or simply, animate, which happens to be the purpose of this package.
The first two letters “ah” as an interjection can sometimes mean eureka, e.g. “ah, I finally figured it out!” Hopefully, this package can help viewers gain insight from their data.
Additionally, “ah” as an interjection can be an exclamation of joy, e.g. “ah, this is so cool!” Hopefully, this package can bring joy to its viewers too.
Because developing this package was one of the author’s primary pastime during the COVID-19 pandemic, “ahlive” can also be considered a portmanteau, or a blend of two or more words’ meanings. The first two letters “ah” are the author’s initials and the last four letters is “live”: this package helped the author, Andrew Huang, live through the quarantine.
The author has previously considered naming the package “xlive”, “xvideo”, or “xmovie” because it followed the typical naming scheme for xarray-related packages e.g. xesmf, xskillscore, xgcm, etc. However, the author realized that these names might not be ideal if the user searched these keywords in a professional setting. Nonetheless, while “ahlive” was still being developed privately, another Python animation package named “xmovie” was released.
acknowledgements¶
Besides the required packages, the author would like to give a shoutout to:
easing for sparking the idea of lively animations in Python
easing-functions for exemplifying scalar implementations of easing-functions
bar-chart-race for elucidating implementations of bar chart races
holoviews for inspiring much of ahlive’s syntax and ease of use
xskillscore for exhibiting how to integrate CI and how to release
And, to the author’s girlfriend, Shaojie H., for everything she does. | https://ahlive.readthedocs.io/en/main/introductions/about.html | 2021-10-15T23:48:08 | CC-MAIN-2021-43 | 1634323583087.95 | [] | ahlive.readthedocs.io |
SearchFaces
For a given input face ID, searches for matching faces in the collection the face belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with faces in the specified collection.
You can also search faces without indexing faces by using the
SearchFacesByImage operation.
The operation response returns
an array of faces that match, ordered by similarity score with the highest
similarity first. More specifically, it is an
array of metadata for each face match that is found. Along with the metadata, the
response also
includes a
confidence value for each face match, indicating the confidence
that the specific face matches the input face.
For an example, see Searching for a face using its face ID.
This operation requires permissions to perform the
rekognition:SearchFaces
action.
Request Syntax
{ "CollectionId": "
string", "FaceId": "
string", "FaceMatchThreshold":
number, "MaxFaces":
number}
Request Parameters
The request accepts the following data in JSON format.
- CollectionId
ID of the collection the face belongs to.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 255.
Pattern:
[a-zA-Z0-9_.\-]+
Required: Yes
- FaceId
ID of a face to find matches for in the collection.
Type: String
Pattern:
[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}
Required: Yes
- FaceMatchThreshold
Optional value specifying the minimum confidence in the face match to return. For example, don't return any matches where confidence in matches is less than 70%. The default value is 80%.
Type: Float
Valid Range: Minimum value of 0. Maximum value of 100.
Required: No
- MaxFaces
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 4096.
Required: No
Response Syntax
{ "FaceMatches": [ { "Face": { "BoundingBox": { "Height": number, "Left": number, "Top": number, "Width": number }, "Confidence": number, "ExternalImageId": "string", "FaceId": "string", "ImageId": "string" }, "Similarity": number } ], "FaceModelVersion": "string", "SearchedFaceId": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- FaceMatches
An array of faces that matched the input face, along with the confidence in the match.
Type: Array of FaceMatch objects
- FaceModelVersion
Version number of the face detection model associated with the input collection (
CollectionId).
Type: String
- SearchedFaceId
ID of the face that was searched for matches in a collection.
Type: String
Pattern:
[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}
Errors
- AccessDeniedException
You are not authorized to perform the action.
HTTP Status Code: 400
- InternalServerError
Amazon Rekognition experienced a service issue. Try your call again.
HTTP Status Code: 500
- InvalidParameterException
Input parameter violated a constraint. Validate your parameter before calling the API operation again.: | https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFaces.html | 2021-10-16T00:50:37 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.aws.amazon.com |
This report shows which files are modified (created, updated, or deleted) multiple times during a build.
Jobs involving these files require careful serialization to ensure the file operations sequence is performed in the correct order.
This report requires
--emake-annodetail=file
After you run the report, filter the results:
Type the string you want to filter for in the Filter field.
Use an asterisk to match any number of characters, and use a question mark to match any single character.
You can also use simple regular expressions, for example, *[xz].o and *[x-z].o
Filters are case sensitive.
Press Enter. | https://docs.cloudbees.com/docs/cloudbees-build-acceleration/11.0/user-guide/reports/files-modified-multiple | 2021-10-15T23:54:20 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../_images/rep-multiple.29b1655.jpg', None], dtype=object)] | docs.cloudbees.com |
Update Collect Forms Remotely
How to do over-the-air updates of forms in Collect
To do over the air Medic Collect form updates via HTTP rather than sending APKs which have a long manual install process, follow the steps below:
- Have your xls forms ready in the folder.
- They should use underscore as name separators. e.g form_name.xlsx
- They should have
form_idand
nameproperties in the settings
- Upload the forms to the instance using
cht-confUsing the
upload-collect-formsaction as shown below.
cht --instance=user:[email protected] upload-collect-forms
- Go to the Collect App. Delete All forms then go to
Get Blank Formand select all the forms you need.
Troubleshooting
When you go to
Get Blank Forms and instead of getting a list of the forms available, you get a pop-up error which has a portion of this message instead
...OpenRosa Version 1.0 standard: Forms list entry 1 is missing one or more tags: formId, name or downloadUrl
This means you probably uploaded a XLS file without a
name or
form_id property. To find out which form is missing that, use this command:
curl -vvvv -H "x-openrosa-version: 1"
Should bring a list like this one
Go through the list and see which form has a missing
<name> or
<formID> property. Add it and reupload the forms using
cht-conf again.
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve.
Last modified 16.07.2021: Update references from medic-conf to cht-conf (#529) (8879b50) | https://docs.communityhealthtoolkit.org/apps/guides/updates/collect-forms-update/ | 2021-10-15T22:49:29 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['xform_name_settings.png', 'Name property'], dtype=object)
array(['xform_list.png', 'Xform List'], dtype=object)] | docs.communityhealthtoolkit.org |
Pub/Sub without CloudEvents
介绍
Dapr uses CloudEvents to provide additional context to the event payload, enabling features like:
- 追踪
- Deduplication by message Id
- Content-type for proper deserialization of event’s data
For more information about CloudEvents, read the CloudEvents specification.
When adding Dapr to your application, some services may still need to communicate via raw pub/sub messages not encapsulated in CloudEvents. This may be for compatibility reasons, or because some apps are not using Dapr. Dapr enables apps to publish and subscribe to raw events that are not wrapped in a CloudEvent.
WarningNot using CloudEvents disables support for tracing, event deduplication per messageId, content-type metadata, and any other features built using the CloudEvent schema.
Publishing raw messages
Dapr apps are able to publish raw events to pub/sub topics without CloudEvent encapsulation, for compatibility with non-Dapr apps.
To disable CloudEvent wrapping, set the
rawPayload metadata to
true as part of the publishing request. This allows subscribers to receive these messages without having to parse the CloudEvent schema.
curl -X "POST" -H "Content-Type: application/json" -d '{"order-number": "345"}'
from dapr.clients import DaprClient with DaprClient() as d: req_data = { 'order-number': '345' } # Create a typed message with content type and body resp = d.publish_event( pubsub_name='pubsub', topic='TOPIC_A', data=json.dumps(req_data), metadata=( ('rawPayload', 'true') ) ) # Print the request print(req_data, flush=True)
<?php require_once __DIR__.'/vendor/autoload.php'; $app = \Dapr\App::create(); $app->run(function(\DI\FactoryInterface $factory) { $publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']); $publisher->topic('TOPIC_A')->publish('data', ['rawPayload' => 'true']); });
Subscribing to raw messages
Dapr apps are also able to subscribe to raw events coming from existing pub/sub topics that do not use CloudEvent encapsulation.
Programmatically subscribe to raw events
When subscribing programmatically, add the additional metadata entry for
rawPayload so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
import flask from flask import request, jsonify from flask_cors import CORS import json import sys app = flask.Flask(__name__) CORS(app) @app.route('/dapr/subscribe', methods=['GET']) def subscribe(): subscriptions = [{'pubsubname': 'pubsub', 'topic': 'deathStarStatus', 'route': 'dsstatus', 'metadata': { 'rawPayload': 'true', } }] return jsonify(subscriptions) @app.route('/dsstatus', methods=['POST']) def ds_subscriber(): print(request.json, flush=True) return json.dumps({'success':True}), 200, {'ContentType':'application/json'} app.run()
<?php require_once __DIR__.'/vendor/autoload.php'; $app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [ new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus', metadata: [ 'rawPayload' => 'true'] ), ]])); $app->post('/dsstatus', function( #[\Dapr\Attributes\FromBody] \Dapr\PubSub\CloudEvent $cloudEvent, \Psr\Log\LoggerInterface $logger ) { $logger->alert('Received event: {event}', ['event' => $cloudEvent]); return ['status' => 'SUCCESS']; } ); $app->start();
Declaratively subscribe to raw events
Subscription Custom Resources Definitions (CRDs) do not currently contain metadata attributes (issue #3225). At this time subscribing to raw events can only be done through programmatic subscriptions.
相关链接
- Learn more about how to publish and subscribe
- List of pub/sub components
- Read the API reference
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve. | https://docs.dapr.io/zh-hans/developing-applications/building-blocks/pubsub/pubsub-raw/ | 2021-10-16T00:14:59 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['/images/pubsub_publish_raw.png',
'Diagram showing how to publish with Dapr when subscriber does not use Dapr or CloudEvent'],
dtype=object)
array(['/images/pubsub_subscribe_raw.png',
'Diagram showing how to subscribe with Dapr when publisher does not use Dapr or CloudEvent'],
dtype=object) ] | docs.dapr.io |
Create a Sales Receipt in Quickbooks (Zapier)
Use Zapier to Create a Sales Receipt in Quickbooks
Note: if you haven’t set up a zapier account for your MemberPress, then follow this guide first.
First click the “Make a Zap!” button on the top right of your Zapier page.
Choose MemberPress as your Trigger App.
Select “Transaction Completed” as your trigger and continue.
Select your MemberPress account (you can test its connection here if you wish) and continue
Pick a Sample transaction to test your Zap. If you don’t see any then just hit the “Get More Samples” button to get a transaction from your site, it is best to get one with as much information as possible so you know what to add to your zap. Continue.
Now click the “Add a Step” button on the left below your MemberPress Step
Choose Action/Search
Choose QuickBooks Online (if you don’t see it you will need to add and link the app to Zapier)
You need to find them to give them a receipt, and if they are not in QuickBooks yet, which is likely if you are just starting this, then it will add them as a customer as well.
Connect or Select your QuickBooks Online account, continue.
Here is where you need to search for your customer, choose "Email" in the search field
For the Search Value you need to click the "insert a field" button.Then it should populate with all your MemberPress Fields, here you need to select the "Member Email" Field
Next, for QuickBooks to create a customer if there isn't already one in your system click the "Create QuickBooks Online Customer if it doesn't exist yet?" boxThen using the "insert a field" button find the "Member First Name" and the "Member Last Name". (You will need to collect both first and last names in your membership registration as it is required by QuickBooks to create an account.) Choose the "Member First Name" then a space to avoid them being lumped together, hit the "insert a field" button again and find the "Member Last Name"
This is all that is required to create a user, but feel free to add as many fields as you want by following the same steps. Once you have collected all the info, hit Continue.
The Next page will let you test your setup .Simply hit the "Fetch & Continue" at the bottom of the page to try it out, if successful you will see this screen.Continue onto the final action
Click the "+ Add a Step" button again as before and Choose the "action" then choose "QuickBooks Online" but this time you will choose "create a sales receipt"
alt="">Connect your QuickBooks account again and continue.
Find the customer you created in step 2 by selecting "use a custom value" in the customer dropdown
In the "custom value for customer ID" choose the quickbooks "find or create a customer" then choose "ID" in the dropdown
You can leave the "find customer by name/email" blank
select the "insert a field" button in the "email" options
select the "1 transaction completed" then select the "Member Email" option
fill out any other optional fields you choose, then navigate to the "line items" section
following the same steps as before, select the "amount " line from the first step, this one is required so do not skip it
then find and add any other information you want to include in the receipt
after you have added your information, click continue.
Next you will test your Zap by clicking the "Send Test to QuickBooks Online"
alt="">
If all goes well you will see this message
Finally select "Finish" and you should see a receipt like this in your QuickBooks | https://docs.memberpress.com/article/273-use-zapier-to-create-a-sales-receipt-in-quickbooks | 2021-10-15T22:48:28 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5cf1501204286333a264071e/file-B8KsXdp5El.png',
None], dtype=object) ] | docs.memberpress.com |
Installing & Configuring our Divi Add-on
This documentation will show you how to install and use the Divi add-on for MemberPress which is available on all MemberPress plans.
Installation:
To install the Divi add-on, you can go to MemberPress menu and click on the green Add-ons link.
Next you will scroll down and find the Divi add-on. Then click on the Install Add-on button. This will automatically install and activate the add-on.
Note: You will also need to have Divi installed to be able to use the MemberPress Divi Add-on.
Configuration:
Once you have both the MemberPress Divi Content Protection Add-on installed and Divi installed and activated as your theme, you can go to the page, post, etc. you would like to protect content on and launch Divi:
Once you have added the content to a page, post, etc. you will want to create a rule in MemberPress > Rules page so that you can protect it: Rules Overview.
After creating a rule, you can go back to the content and click on the gear icon on a row and then click on the MemberPress tab:
>. | https://docs.memberpress.com/article/322-installing-configuring-our-divi-add-on | 2021-10-15T23:30:08 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f0369d004286306f8063f0e/file-hgnm9fYB7P.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f88d1d84cedfd0017dd2479/file-0tKW8Ard8Y.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f88d40ecff47e001a58ef7e/file-VYdHMik6ac.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f88db2acff47e001a58ef8f/file-Gy4AVt7TeM.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f88dbaf4cedfd0017dd249b/file-mGf5OZKgR7.png',
None], dtype=object) ] | docs.memberpress.com |
Program settings¶
In the “Program Settings”, you can set an autorunning program, upload a Python program and display a program.
Program Autorun¶
The “PROGRAM AUTORUN” functionality allows you to execute a set of sequences, directly by pressing the top button of the robot.
Here is how it works:
Program a sequence with Blockly.
Save the sequence on Ned.
Add the sequence to the sequence autorun.
Now trigger the sequence with the top button, without having to open Niryo Studio.
Hint
This is very useful when you use Ned in an exhibition event or for a demo. You will just need to power on Ned, wait for the LED to turn blue or green, and then press the button to run a given set of programs (that you previously programmed).
Display the default program triggered with the button
Select the program
Choose the “PROGRAM AUTORUN” mode. You have choices:
ONE SHOT: when you press and release the top button, the given set of sequences will be run once. After that, Ned will go back to a “resting position” and activate the “Learning Mode”. You can start another run by pressing the button again. If you press the button while the set of sequences is running, it will stop the execution.
LOOP: when you press and release the top button, the selected program will run in a loop, forever. You can press the button again to stop the execution.
SELECT button: select the program autorun.
Programs list¶
You can pick a previously saved program from the select box. Click on a sequence to see all the properties and actions available for this program.
You can display the Python code of the program.
After the program details, you get a series of available actions:
Play the selected program (same as the “PLAY” button in Niryo blocks).
Stop the current program execution (same as the “STOP” button in Niryo blocks).
Open the program in Niryo blocks. This will make you switch to the “Niryo blocks” panel, and the program will be added to the current workspace.
Hint
This functionality is very useful when you want to duplicate and create a new program from an existing one.
Edit the program. You can modify the name, the description, and the Blockly XML.
Delete the program. | https://docs.niryo.com/product/niryo-studio/v3.2.1/en/source/programs.html | 2021-10-15T23:52:46 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../_images/screen_niryostudio_program_settings.png',
'../_images/screen_niryostudio_program_settings.png'], dtype=object)
array(['../_images/screen_niryostudio_autorun_program.png',
'../_images/screen_niryostudio_autorun_program.png'], dtype=object)
array(['../_images/screen_niryostudio_last_python_program.png',
'../_images/screen_niryostudio_last_python_program.png'],
dtype=object) ] | docs.niryo.com |
Monitoring and observing Vector
Use logs and metrics generated by Vector itself in your Vector topology
Although Vector is primarily used to handle observability data from from a wide variety of sources, we also strive to
make Vector highly observable itself. To that end, Vector provides two sources,
internal_logs and
internal_metrics, that you can use to handle logs and metrics produced by Vector just like you
would logs and metrics from any other source.
Logs
Vector provides clear, informative, well-structured logs via the
internal_logs source. This section
shows you how to use them in your Vector topology.
Which logs Vector pipes through the
internal_logs source is determined by the log level, which defaults
to
info.
In addition to the
internal_logs source, Vector also writes its logs to [
stderr][], which can be captured by
Kubernetes, SystemD, or however you are running Vector.
Accessing logs
You can access Vector’s logs by adding an
internal_logs source to your topology. Here’s an example
configuration that takes Vector’s logs and pipes them to the console as plain text:
[sources.vector_logs] type = "internal_logs" [sinks.console] type = "console" inputs = ["vector_logs"]
Using Vector logs
Once Vector logs enter your topology through the
internal_logs source, you can treat them like logs from any other
system, i.e. you can transform them and send them off to any number of sinks. The configuration below, for example,
transforms Vector’s logs using the
remap transform and Vector Remap Language and then stores those
logs in Clickhouse:
[sources.vector_logs] type = "internal_logs" [transforms.modify] type = "remap" inputs = ["vector_logs"] # Reformat the timestamp to Unix time source = ''' .timestamp = to_unix_timestamp!(to_timestamp!(.timestamp)) ''' [sinks.database] type = "clickhouse" inputs = ["modify"] host = "" table = "vector-log-data"
Configuring logs
Levels
Vector logs at the
info level by default. You can set a different level when starting up your instance using either
command-line flags or the
LOG environment variable. The table below details these options:
Stack traces
You can enable full error backtraces by setting the
RUST_BACKTRACE=full environment variable. More on this in the
Troubleshooting guide. You can
Metrics
You can monitor metrics produced by Vector using the
internal_metrics source. As with Vector’s
internal logs, you can configure an
internal_metrics source and use the piped-in metrics
however you wish. Here’s an example configuration that
Metrics catalogue
The table below provides a list of internal metrics provided by Vector. See the docs for the
internal_metrics
source for more detailed information about the available metrics.
Troubleshooting
More information in our troubleshooting guide:
How it works
Event-driven observability
Vector employs an event-driven observability strategy that ensures consistent and correlated telemetry data. You can read more about our approach in RFC 2064.
Log rate limiting
Vector rate limits log events in the hot path. This enables you to get granular insight without the risk of saturating IO and disrupting the service. The trade-off is that repetitive logs aren’t logged. | https://docs.vector.dev/docs/administration/monitoring/ | 2021-10-15T22:37:25 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.vector.dev |
Any Appian feature that contains "[Deprecated]" in its title, or has language declaring that it is "deprecated," is a feature that will be removed in a later release of Appian.
These features will continue to work up until they are removed, but Appian encourages users to replace deprecated features in their applications ahead of their removal.
If you have any questions or concerns related to a deprecated feature, contact Support.
For more information specifically about Application Portal and non-SAIL interfaces, see Application Portal Support.
The following table lists the documentation content related to features and functionality currently in the deprecated status, and the Appian release version it was placed into that status.
The following table only includes known deprecations up to Appian 20.3. Navigate to the latest version of the docs to see what's been deprecated since the 20.3 release.
On This Page | https://docs.appian.com/suite/help/20.3/Deprecated_Features.html | 2021-10-15T23:42:46 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.appian.com |
Overview of Lifecycle Manager
An introduction to Lifecycle Manager (LCM) for provisioning DataStax Enterprise (DSE) clusters and centrally managing configurations. Simplify deploying and configuring DataStax Enterprise clusters with Lifecycle Manager.
Lifecycle Manager (LCM) is a powerful provisioning and configuration management system designed for ease of use with DataStax Enterprise (DSE) clusters. Graphical workflows enable efficient installation and configuration of DSE, empowering your organization to effectively manage DSE clusters without requiring extensive platform expertise.
- Efficiently monitor and prevent configuration drift by defining configuration profiles that apply to the cluster, datacenter, or node level.
- Enforce uniform configurations that adhere to the desired baseline configurations for the workload of each datacenter.
- Securely store credentials for automating access to machines and package repositories without the need to repeatedly enter credentials during installation and configuration jobs.
- Monitor job status with unprecedented access and deep transparency into each recorded and timestamped step of the deploy process.
- Drill into job details to troubleshoot installing and configuring jobs from the convenience of the Jobs workspace without the immediate need to scour various logs for information.
Getting started in three minutes
View the following video to learn how to create a three-node cluster using LCM in just three minutes! After watching the video, follow the procedures for Installing a DataStax Enterprise cluster using Lifecycle Manager to create your own DSE clusters. | https://docs.datastax.com/en/opscenter/6.5/opsc/LCM/opscLCMOverview.html | 2021-10-15T22:50:04 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.datastax.com |
Add SOP
Summary[edit]
The Add SOP can both create new Points and Polygons on its own, or it can be used to add Points and Polygons to an existing input.
If an input is specified, this SOP adds points and polygons to it as specified below. If no input is specified, then it generates the points and polygons below as a new entity. It can read points and vertices from DATs. See also DAT to SOP.
Parameters - Points Page
Points DAT
pointdat - Path to a Table DAT containing point data. By default, x, y, z, and w can be defined in the first 4 columns of the table using un-named columns.
If the
Named Attributes parameter below is turned on, the following attributes can be defined in the Points Table DAT using named columns:
P(0) P(1) P(2) P(3)
N(0) N(1) N(2)
Cd(0) Cd(1) Cd(2) Cd(3)
uv(0) uv(1) uv(2)
Any other columns are added as single-float attributes.
NOTE: Turn off
Compute Normals on the Polygon parameter page when supplying
N(0) N(1) N(2) in the Points Table DAT.
Named Attributes
namedattribs - Allows extra attributes to be defined in the Point Table DAT above.
Delete Geometry, Keep Points
keep - Use this option to remove any unused points. When checked, existing geometry in the input are discarded, but the polygons created by this SOP are kept, as well as any points in the input.
Add Points
addpts - When On you can add individual points with position and weight of your choosing by using the parameters below.
Position 0
pos0 - ⊞ - The three input fields represent the X, Y and Z coordinates of the point. These values can be constants (numbers) or variables. Below are three examples:
0.2 0.42 1.3
0.2 op('xform1').par.tx 1.36
# read the sixth point (first point is 0) from the SOP, grid1 op('grid1').points[5].x op('grid1').points[5].y op('grid1').points[5].z
- Position 0
pos0x-
- Position 0
pos0y-
- Position 0
pos0z-
Weight 0
weight0 -.
Parameters - Polygons Page
Method
method - ⊞ - Specify to create polygons from the points by using a Group method or Pattern Method.
- By Group
group- Create as many polygons as determined by the group field and by the grouping / skipping rules.
- By Pattern
pattern- Specify the points to use to create polygons using the parameters Polygon Table or Polygon 0 below.
Group
group - Subset of points to be connected.
Add
add - ⊞ - Optionally join subgroups of points.
- All Points
all- Adds all points just as if you added them manually in the Points page.
- Groups of N Points
group- Adds only the number of points specified.
- Skip Every Nth Point
skip- Adds points, buts skips every Nth one.
- Each Group Separately
sep- Creates separate polygons for each group specified in the
Groupparameter. For example, if you have a Group SOP creating a group called group1 and using the
Create Boundary Groupsoption, you can connect this to an Add SOP and enter group1__* in the
Groupparameter. If
Each Group Separatelyis chosen, polygons will be created for each boundary on the surface.
Tip: The Each Group Separately option is useful when pasting surfaces. Boundary groups can be created for the boundaries of two adjacent surfaces, and then the PolyLoft SOP (using the Points option) can be used to stitch these surfaces together.
N
inc - Increment / skip amount to use for adding points.
Closed
closedall - Closes the generated polygons.
Polygons Table
polydat - Path to a Table DAT containing polygon data. Accepts rows of polygons specified by point number in the first column. The second column indicates if the polygons are closed (1) or open (0).
Polygon 0
prim0 - Create a fixed number of polygons by specifying a point pattern for each polygon. Enter connection lists here to add polygons. These consist of a list of point numbers to define the order in which the points are to be connected. The form is: {from}-{to}[:{every}][,{of}].
Examples of Valid Connection Lists:
1 2 3 4- Makes a polygon by connecting point numbers 1,2,3,4.
1 3-15 16 8- All points from 3-15 are included.
1-234 820-410 235-409- Points from 1-820 are included, in the specified order.
0-15:2- Every other point from 0 to 15 is included.
0-15:2,3- Every 2 of 3 points are included (i.e. 0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15).
!4- Every point except 4 is included.
!100-200- Every point <100 and >200 is included.
*- Include all points.
9-0- The first ten points are included in reverse order.
!9-0- All but the first ten points are included in reverse order.
Closed 0
closed0 - To create a closed polygon, check the Closed button.
Parameters - Post Page
Remove Unused Points
remove - Keep only the connected points, and discard unused points.
Compute Normals
normals - Creates normals on the geometry.
Uses
Used in conjunction with a point expression, the Add SOP can be useful for extracting a specific point from another SOP. For example, to extract the X, Y and Z value of the fifth point, from a Grid SOP in geo1:
op('geo1/grid1').points[5].x op('geo1/grid1').points[5].y op('geo1/grid1').points[5].z
Points added in this way are appended to the end of the point list if a Source is specified. Middle-mouse click on the SOP node to find out how many points there are. For example, if you have added two points and there are 347 points (from 0 to 346), you have added the last two point numbers: 345 and 346.
Operator Inputs
- Input 0 -
TouchDesigner Build: | https://docs.derivative.ca/Add_SOP | 2021-10-15T22:34:51 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.derivative.ca |
Date: Tue, 30 Sep 2014 15:28:01 +0200 From: Polytropon <[email protected]> To: Sandeep Gangadharan1 <[email protected]> Cc: [email protected] Subject: Re: Bash Shellshock Bug Message-ID: <[email protected]> In-Reply-To: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Tue, 30 Sep 2014 20:10:47 +1100, Sandeep Gangadharan1 wrote: > Hi Team > > Is there anyway to get patch for my FreeBSD 6.2 . If I enable internet > connection is it possible. It is possible, but not trivial. The thread mentioned should give you an impression of the _different_ ways that exist to perform the bash update. However, _you_ need to decide which approach you want to try. Keep in mind either way might work, or might not (simply because FreeBSD 6 isn't supported anymore, and things have changed, especially the package management and the ports infra- structure). > I cannot upgrade the version right now. That should not be the primary problem, even though you should consider upgrading to a supported FreeBSD version (v9 and v10 currently). :-) > Can you please share the method of upgrading the package. Read the mentioned ways before you start. This is not a "follow step by step" procedure - it involves decisions and is a little bit of "trial & error". ;-) I'll simply quote parts from the discussion thread, if this is okay for you. Still you need to check which works for you. Make a backup (!) first (at least of the installed bash package). Before you do _anything_ to your current ports tree, do this: # cd /usr/ports/shells/bash # make package This will make a backup package in /usr/ports/packages of your _current_ bash (the _working_ one), in case anything should go wrong. You can then later on re-install bash with # pkg_add /usr/ports/packages/All/bash-x.y.z.tbz (where x.y.z reflects the version number of bash prior to your upgrade attempts). If you can still access FreeBSD 6 packages (note that you might point $PACKAGESITE at the _archives_ section of the FreeBSD FTP server; see "man pkg_add" for details. # setenv PACKAGESITE= # pkg_delete -f /var/db/pkg/bash-x.y.z # pkg_add -r bash Use the [Tab] key to autocomplete the correct version number in the 2nd command. Probably that won't work; bash-3.2.25 seems the last version available here. So you'll probably have to build from source. That might be a problem due to the architectural difference between FreeBSD 6 and the current build system... so the "obvious" # pkg upgrade bash doesn't work for you, because FreeBSD 6 doesn't have pkgng yet. Again note: Make a backup (!) of your current /usr/ports tree before you start! Updating the ports tree is possible, but probably you don't even have portsnap on FreeBSD 6 yet. I'm not sure when it has been introduced, but I assume it was somewhere betweeen FreeBSD 7 and 8... This is how you would do it: # portsnap fetch update # cd /usr/ports/shells/bash # make deinstall # make # make reinstall If you have any other means to update your ports tree (CVS was the standard at FreeBSD 6, I don't know if this is still supported, as FreeBSD now uses Subversion), you could also try the equivalent with binary packages: # portsnap fetch update # portupgrade -P bash or # portsnap fetch update # portmaster -P shells/bash depending on your use of a port management tool. Omit -P to try to build from source. I'd like to emphasize the advice I've provided in the thread mentioned, after being informed that building _with_ the ports tree will probably be problematic: '.'' This is probably the "easiest" way to try. You don't mess up things with your ports collection here. Again, please note: Make a backup copy of your working bash version, as I said above. You can find the full source here: Take your time to read, and to think about the problem. 
I'm sure you'll be successful once you've figured out which way works for you. Also note that I haven't tested _anything_ of the methods mentioned here, so I can't promise they'll work. -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=281752+0+/usr/local/www/mailindex/archive/2014/freebsd-questions/20141005.freebsd-questions | 2021-10-16T00:12:00 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.freebsd.org |
Tests¶
JavaScript applications can be tested with tests written in JavaScript.
The JavaScript test files must be located in the project folder
src/test/js.
All JavaScript files (
*.js) found in this folder, at any level, are considered as test files.
In order to setup JavaScript tests for your application, follow these steps:
- create an Add-On Library project or a Standalone Application project
- define the following properties in the module.ivy file of the project inside the
ea:buildtag (if the properties already exist, replace them):
<ea:property <ea:property
- add the MicroEJ JavaScript dependency in the module.ivy file of the project:
<dependency org="com.microej.library.runtime" name="js" rev="0.10.0"/>
- define the platform to use to run the tests with one of the options described in Platform Selection section
- create a file
assert.jsin the folder
src/test/resourceswith the following content:
var assertionCount = 0; function assert(value) { assertionCount++; if (value == 0) { print("assert " + assertionCount + " - FAILED"); } else { print("assert " + assertionCount + " - PASSED"); } }
This method
assert will be available in all tests to do assertions.
- create a file
test.jsin the folder
src/test/jsand write your first test:
var a = 5; var b = 3; var sum = a + b; assert(sum === 8);
The execution of the tests produces a report available in the folder
target~/test/html for the project. | https://docs.microej.com/en/latest/ApplicationDeveloperGuide/js/tests.html | 2021-10-16T00:00:38 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.microej.com |
If you are running ONTAP 9.6 or later, you can set up Alibaba Cloud Object Storage as the cloud tier for FabricPool.
Doing so enables ONTAP to access the data in Alibaba Cloud Object Storage without interruption.
storage aggregate object-store config create my_ali_oss_store_1 -provider-type AliCloud -server oss-us-east-1.aliyuncs.com -container-name my-ali-oss-bucket -access-key DXJRXHPXHYXA9X31X3JX | http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-mgng-stor-tier-fp/GUID-803C38BF-BE9D-4069-80F1-69D7CD203FF5.html | 2021-10-15T23:15:37 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.netapp.com |
AWS Encryption SDK for Python
This topic explains how to install and use the AWS Encryption SDK for Python. For
details about
programming with the AWS Encryption SDK for Python, see the aws-encryption-sdk-python
Prerequisites
Before you install the AWS Encryption SDK for Python, be sure you have the following prerequisites.
- A supported version of Python
Python 3.5 or later is required by the AWS Encryption SDK for Python versions 3.0.x and later. To download Python, see Python downloads
.
Earlier versions of the AWS Encryption SDK for Python support Python 2.7 and Python 3.4, but we recommend that you use the latest version of the AWS Encryption SDK.
- The pip installation tool for Python
pip is included in Python 3.5 or later, although you might want to upgrade it. For more information about upgrading or installing pip, see Installation
in the pip documentation.
Installation
Use pip to install the AWS Encryption SDK for Python, as shown in the following examples.
- To install the latest version
pip install aws-encryption-sdk
For more details about using pip to install and upgrade packages, see Installing
Packages
The SDK requires the cryptography
library
pip install and build the
cryptography library on Windows.
pip 8.1 and
later installs and builds cryptography on Linux. If you are
using an earlier version of
pip and your Linux environment doesn't have the tools
needed to build the cryptography library, you need to install
them. For more information, see Building
Cryptography on Linux
For the latest development version of this SDK, go to the aws-encryption-sdk-python GitHub
repository
After you install the SDK, get started by looking at the example Python code in this guide. | https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/python.html | 2021-10-16T01:23:22 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.aws.amazon.com |
dask.array.dstack¶
- dask.array.dstack(tup, allow_unknown_chunksizes=False)[source]¶
Stack arrays in sequence depth wise (along third axis).
This docstring was copied from numpy.dstack.
Some inconsistencies with the Dask version may exist..
- Parameters
- tupsequence of arrays
The arrays must have the same shape along all but the third axis. 1-D or 2-D arrays must have the same shape.
- Returns
- stackedndarray
The array formed by stacking the given arrays, will be at least 3-D.
See also
concatenate
Join a sequence of arrays along an existing axis.
stack
Join a sequence of arrays along a new axis.
block
Assemble an nd-array from nested lists of blocks.
vstack
Stack arrays in sequence vertically (row wise).
hstack
Stack arrays in sequence horizontally (column wise).
column_stack
Stack 1-D arrays as columns into a 2-D array.
dsplit
Split array along third axis.
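A small sketch of what this looks like in practice (the array values and chunk size are arbitrary examples):

import numpy as np
import dask.array as da

a = da.from_array(np.array([1, 2, 3]), chunks=2)
b = da.from_array(np.array([2, 3, 4]), chunks=2)

stacked = da.dstack([a, b])   # lazy dask equivalent of np.dstack
print(stacked.shape)          # (1, 3, 2), matching NumPy's behaviour
print(stacked.compute())      # materialize the result as a NumPy array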
How to install SSL certificate on Node Server?
Product edition: On-Premises
Overview
Before you install the certificate on the inSync Storage Node Server, you need to create a correctly formatted certificate that will work with inSync.
Refer to the section on How to set up and install a Trusted Certificate from a Certification Authority (CA) to create the correctly formatted certificate.
Install the certificate on inSync Storage Node Server
There are two methods to install the certificate on inSync Storage Node Server:
Manually import the certificate on inSync Storage Node Server
- Stop the inSync services and wait for the processes to end on the Task Manager.
- Rename the existing inSyncServerSSL.key, present in C:\ProgramData\Druva\inSyncCloud\inSyncServer4\, to inSyncServerSSL.key.old, and copy in the new inSyncServerSSL.key generated as per the SSL article.
- Start all the inSync services and check if the certificates have been loaded correctly.
Import the certificate from the admin console
- On inSync Management Console, go to Manage > Storage List > Storage Nodes.
- Select the storage node on which you want to install the certificate.
- Click Edit under the Edge Server tab.
- Click the folder icon for SSL certificate on the Edge Server page to upload the certificate.
Do not use the certificate provided by the CA directly on the admin console.
Joints_interface¶
This package handles packages related to the robot’s joints controller.
It provides an interface to ros_control.
Joints interface node¶
- The ROS Node is made to:
- Interface robot’s motors to joint trajectory controller, from ros_control package.
- Create a controller manager, from the controller_manager package, to provide the infrastructure to load, unload, start and stop controllers.
- Interface with motors calibration.
- Initialize motors parameters.
Task Factory users running version 2020.1.4 or older (released prior to May 27, 2020): There's an important Task Factory update. Please visit here for more details.
Note: Task Factory components can be used with Azure databases. As of the 2018.2.3 release, Task Factory can also be used with Azure Data Factory.
Connection Manager
Note: Azure Storage connection manager is available for SQL versions 2012 and higher.
Azure Storage Connection Manager
The Azure Storage Connection Manager is used to connect to an Azure Machine Learning blob storage.
Azure Rest Connection Manager
Azure Rest Connection Manager
Connection Properties
Used with the Azure Rest Source and Azure Rest Destination.
Proxy Configuration
Azure ML Batch Execution Task
Note: Azure ML Batch Execution is available for SQL versions 2012 and higher.
Azure ML Batch Execution Task
Azure ML Source
Note: Azure ML is available for SQL versions 2012 and higher.
Azure ML Source
Azure Rest Source
Azure.
Azure ML Destination
Note: Azure ML is available for SQL versions 2012 and higher.
Azure ML Destination
Azure Rest Destination
Azure Rest Destination
Target
Begin by creating a connection manager that connects to an Azure Storage container. After a connection manager is created, the source window populates with files and folders. Select the desired file to continue configuration.
Delimited Format
Json Array Format
XML Array Format
To begin, navigate to your Telloe dashboard.
1. Click contacts
2. Click the blue "Add Contact" button.
1. Enter your contacts email address, first name and last name.
2. (Optional) add a custom invitation message - make it welcoming!
3. Enable/Disable certain events to contacts - if disabled, your contact will not be able to book the particular event with you unless changed in their Contact Settings.
4. Press the blue "Add" button.
Read pixels from screen into the saved texture data.
This will copy a rectangular pixel area from the currently active RenderTexture or the view (specified by the source parameter) into the position defined by destX and destY. Both coordinates use pixel space - (0,0) is lower left.
If recalculateMipMaps is set to true, the mip maps of the texture will also be updated. If recalculateMipMaps is set to false, you must call Apply to recalculate them.
This function works on RGBA32, ARGB32 and RGB24 texture formats, when render target is of a similar format too (e.g. usual 32 or 16 bit render texture).
Reading from a HDR render target (ARGBFloat or ARGBHalf render texture formats) into HDR texture formats (RGBAFloat or RGBAHalf) is supported too.
The texture also has to have read/write enabled flag set in the texture import settings.
// Attach this script to a Camera
// Also attach a GameObject that has a Renderer (e.g. a cube) in the Display field
// Press the space key in Play mode to capture

using UnityEngine;

public class Example : MonoBehaviour
{
    // Grab the camera's view when this variable is true.
    bool grab;

    // The "m_Display" is the GameObject whose Texture will be set to the captured image.
    public Renderer m_Display;

    private void Update()
    {
        // Press space to start the screen grab
        if (Input.GetKeyDown(KeyCode.Space))
            grab = true;
    }

    private void OnPostRender()
    {
        if (grab)
        {
            // Create a new texture with the width and height of the screen
            Texture2D texture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
            // Read the pixels in the Rect starting at 0,0 and ending at the screen's width and height
            texture.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false);
            texture.Apply();
            // Check that the display field has been assigned in the Inspector
            if (m_Display != null)
                // Give your GameObject with the renderer this texture
                m_Display.material.mainTexture = texture;
            // Reset the grab state
            grab = false;
        }
    }
}
See Also: EncodeToPNG.
You can enable or disable a web browser's access to ONTAP System Manager. You can also view the System Manager log.
You can control a web browser's access to System Manager by using vserver services web modify -name sysmgr -vserver cluster_name -enabled [true|false].
System Manager logging is recorded in the /mroot/etc/log/mlog/sysmgr.log files of the node that hosts the cluster management LIF at the time System Manager is accessed. You can view the log files by using a browser. The System Manager log is also included in AutoSupport messages.
Return the block info for this block, as provided by hook_block_info().
Return value
array: The block info.
File
- core/modules/layout/includes/block.class.inc, line 180
- A class that wraps around a block to store settings information.
Class
Code
function getBlockInfo() {
  $block_info = layout_get_block_info($this->module, $this->delta);

  // If this is a child block, merge in its child-specific data.
  if ($this->childDelta) {
    $children_blocks = $this->getChildren();
    $block_info = array_merge($block_info, $children_blocks[$this->childDelta]);
  }

  return $block_info;
}
Configuring the Info tab
Use this page to enter your business address. This will be used by MemberPress to calculate any applicable tax rates and to include your business address on email receipts your customers will receive. For those reasons, it's essential that you have the information correctly entered here.
Please Note: The State* field should contain the 2-character ISO 3166-2 code. Locate your country on this list and click on its states/provinces/territories link to get the codes. For example: Alabama, United States would be entered as AL in the State* field as shown below:
Settings Overview for Settings > Taxes > Tax Settings
This is the settings overview for the GetPaid > Settings > Taxes > Tax Settings page.
Tax Settings
- Enable Taxes - If checked, taxes will be enabled on invoices.
- Fallback Tax Rates - Specify a % value; customers not in any other specific tax range will be charged this rate.
Azure Data Lake Storage Gen1 REST API
Use the Azure Data Lake Store REST APIs to create and manage Data Lake Store resources through Azure Resource Manager. For information about securing requests to these APIs, see Authenticating Azure Resource Manager requests.
REST Operation Groups
Common parameters and headers
The following information is common to all tasks that you might do related to Data Lake Store:
- Your Data Lake Store account name.
- Set the Content-Type header to application/json. Set the Authorization header to a JSON Web Token that you obtain from Azure Active Directory. For more information, see Authenticating Azure Resource Manager requests.
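As an illustration of those common headers, here is a hypothetical management call using Python's requests library. The subscription ID, resource group, account name and API version are placeholders rather than values taken from this page.

import requests

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<resource-group>"          # placeholder
account_name = "<datalake-account-name>"     # placeholder
token = "<JWT obtained from Azure Active Directory>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.DataLakeStore/accounts/{account_name}"
    "?api-version=2016-11-01"                # assumed API version
)

headers = {
    "Content-Type": "application/json",      # common header described above
    "Authorization": f"Bearer {token}",      # JSON Web Token from Azure AD
}

response = requests.get(url, headers=headers)
print(response.status_code, response.json())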
Chapter 4: Bling-bling¶
As a reward for making it all the way to the end, we will help you add some fancy features to your project, otherwise known as bling and that means having to write JavaScript. Fortunately Plone comes with jQuery so we can easily integrate.
The final part of this tutorial will allow users to check and un-check items on their todo list without having to load a new page request. Note that by developing the functionality in this order, 100% of the functionality of the application remains working even when javascript is disabled. Win!
AJAX view¶
Before we add front-end bling, we need some code that can handle these requests coming in. Let’s create a simple view that will update the object in context to a new state. Go to GitHub and copy the code for WorkflowTransition class in todo.py. This class represents a view that our AJAX code will call. You can also get the code with git, however note that now we are checking out code from master, as Chapter 4 is the last chapter and its code is in the master branch.
$ git checkout master src/tutorial/todoapp/todo.py
Take a look at the WorkflowTransition class and comments around the code. There are a couple of things to point out specific to this setup:
grok.context(Container)
Tells us that this view should be called in the context of a Dexterity Container item. So if you try to go to this view from the portal root or anywhere in the site that is not a Dexterity item, Plone will return a 404 - not found error. By default all Dexterity types that you create TTW are based on the Dexterity Container base class.
grok.name('update_workflow')
This tells us on which URL the view will be available on. In this case, on <url_to_plone_content_object>/update_workflow.
def render(self):
render is a special function that must be used. It is where all of the code must go when used with grok directives. This is the main block of code that will be executed.
transition = self.request.form.get('transition', '')
self.request is set by the base class, and anything based on BrowserView will have access to this variable. All of GET/POST parameters will be stored in self.request.form.
self.request.response.setHeader( 'Content-Type', 'application/json; charset=utf-8') return json.dumps(results)
When working with JSON, it’s not required to set the header content type, but when used with certain jQuery calls it is expected to have the header set correctly. If you don’t set this, it will sometimes work and sometimes not. Get used to setting it!
Additionally, we return the result serialized as json by default. For making and testing JSON web service calls, keep in mind that they should do exactly one thing and no more. This makes it easy to integrate with Javascript and VERY easy to test. We’ll see later on how easy it is to test this view.
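For orientation, here is a minimal sketch of how these pieces fit together. It is not the full class from todo.py (the real view on GitHub also performs the actual workflow transition and some sanity checks), but it shows the grok directives, the form parameter and the JSON response described above.

# Sketch only -- see todo.py in the master branch for the real implementation.
import json
from five import grok
from plone.dexterity.content import Container


class WorkflowTransition(grok.View):
    """AJAX view that switches the workflow state of the context item."""
    grok.context(Container)        # only available on Dexterity containers
    grok.name('update_workflow')   # reachable at <item_url>/update_workflow

    def render(self):
        # GET/POST parameters are available in self.request.form
        transition = self.request.form.get('transition', '')

        # ... perform the workflow transition on self.context here ...
        results = {'transition': transition}

        # Always set the JSON content type so jQuery handles the response reliably
        self.request.response.setHeader(
            'Content-Type', 'application/json; charset=utf-8')
        return json.dumps(results)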
Furthermore, before taking the plunge to wire up JavaScript, go directly to the URL and test the change. For example, if you have an item in your site, you can test the view by appending the view name and GET variables to the end of the item's URL, i.e. <item_url>/update_workflow?transition=complete. However, you first need to restart Zope so your Python files get reloaded!
For extra clarity: if you are not an expert in python, plone, AND javascript, I highly recommend integrating bling bling in the following order:
- Write base view and passing test cases
- Test views in browser
- Make ajax interactive
Starting with bling from the start will only bring you pain.
Custom JavaScript¶
Now that we know the update_workflow view is working, let’s add some AJAX handling on the top of it. Checkout the Javascript file and a JavaScript registry file into your working directory:
git checkout master src/tutorial/todoapp/static/todoapp.js
git checkout master src/tutorial/todoapp/profiles/default/jsregistry.xml
jsregistry.xml contains all configuration needed to tell Plone how it should register and use our JavaScript. It has a lot of options that are pretty self explanatory (if you think like a machine).
Trying it out!¶
Holy moley you made it! Restart Zope (to reload Python files), reactivate the product (to reimport XML files), do a hard reload in your web browser (to clear any caches) and check out your todo list. The todo items should toggle between complete and incomplete without the page reloading. Sweet!
Tests¶
As always, let’s add tests! First add the following snippet to test_setup to verify that your JavaScript is registered in Plone.
# jsregistry.xml
def test_js_registered(self):
    """Test that todoapp.js file is registered in portal_javascript."""
    resources = self.portal.portal_javascripts.getResources()
    ids = [r.getId() for r in resources]
    self.assertIn('++resource++tutorial.todoapp/todoapp.js', ids)
Lastly, add a new test module: test_workflow.py. Download it from GitHub, put and it in your tests folder and run tests. Then fiddle around with it to see what it does. As always, you can use git to get the file.
$ git checkout master src/tutorial/todoapp/tests/test_workflow.py
The end¶
This concludes the Todo app in Plone tutorial. Congratulations! Now it’s time to checkout other tutorials and documentation available on developer.plone.org!
Troubleshooting¶
If something goes wrong you can always go to GitHub to see what the code in master should look like, and compare this to what you have on your local machine.
If you are developing your application using Visual LANSA with an IBM i Master Repository, you will need to transfer data from one platform to another. In an IBM i Master system, the definitions required to run the Host Monitor and to export using the PC export can be created using the LANSA REQUEST(PCMAINT), or they can be created automatically by performing System Initialization or Partition LANSA/SuperServer which have their own tables.
Refer to Create PC Definitions and Change PC Definitions in the LANSA for i User Guide for further details.
WebPartTracker Class
Definition
Monitors Web Parts connections for circular connections.
public ref class WebPartTracker sealed : IDisposable
public sealed class WebPartTracker : IDisposable
type WebPartTracker = class interface IDisposable
Public NotInheritable Class WebPartTracker Implements IDisposable
- Inheritance: Object → WebPartTracker
- Implements: IDisposable
Assigning conversations
Conversations can be assigned in a few places. Let's check out the different ways:
Within a conversation
Click on the Assign icon at the top of the conversation, and a dropdown menu will appear of all of your Users in Help Scout. Select any User and it will automatically assign the conversation to them.
Within a folder
Select any conversation in a folder, and a floating menu will appear. Click on the Assign icon, and select any User from the dropdown menu that you'd like to assign it to.
When replying or adding a note
You can choose a User to assign a conversation to right before you send a reply or add a note to the conversation. This will assign the conversation to that User upon sending the reply or adding the note.
Using a workflow
You can also assign conversations automatically with a workflow: choose the assignee in the workflow's action section, and it will run whenever a conversation meets those conditions.
For a primer on workflows, check out this article.
Viewing assigned conversations
Anyone with access to a mailbox, regardless of their role, can view assigned conversations and also change the assignee on a conversation. Each conversation keeps an audit trail entry of who assigned a conversation to whom, and when it happened. It will appear as a grey line item, like this:
Searching and filtering by assignee
Running a search by assignee will show all of their conversations across different mailboxes and the entire account.
Cucumber HTML Formatter
The Cucumber HTML Formatter renders Gherkin documents as HTML.
It can optionally render extra information such as Cucumber results, stack traces, screenshots, Gherkin-Lint results or any other information that can be embedded in the Cucumber Event Protocol.
bin/cucumber-html-formatter is an executable that reads events and outputs HTML.
Events can be read from STDIN or a TCP socket. HTML can be output to STDOUT, a specified directory or directly to a browser.
For more details, see the technical documentation.
Dependency Injection
Dependency Injection is a specific usage of the larger Inversion of Control concept. Its benefits include:
- Dependency Injection is an important pattern for creating classes that are easier to unit test in isolation
- Promotes loose coupling between classes and subsystems
- Adds potential flexibility to a codebase for future changes
- Can enable better code reuse
- The implementation is simple and does *not* require a fancy DI tool
As an example, say we're building a screen with the Model-View-Presenter pattern, consisting of:
- Model – Whatever business object/DataSet/chunk of data is being displayed or edited
- View – A WinForms UserControl class. Displays data to a user and captures user input and screen events (duh).
- Service – A web service proxy class to send requests to the backend
- Presenter – The controller class that coordinates all of the above.
There are several common flavors of Dependency Injection:
- Constructor Injection – Attach the dependencies through a constructor function at object creation
- Setter Injection – Attach the dependencies through setter properties
- Interface Injection – This is an odd duck. I’ve never used it or seen this used. I suspect its usage is driven by specific DI tools in the Java world.
- Service Locator – Use a well known class that knows how to retrieve and create dependencies. Not technically DI, but this is what most DI/IoC container tools really do.
Constructor Injection
My preference is to use the “Constructor Injection” flavor of DI. The mechanism here is pretty simple; just push the dependencies in through the constructor function.
// Constructor Injection: dependencies are pushed in at object creation
public class Presenter
{
    private IView _view;
    private Model _model;
    private IService _service;

    public Presenter(IView view, IService service)
    {
        _view = view;
        _service = service;
    }
}

// Setter Injection: a default constructor plus public setter properties
public class Presenter
{
    private IView _view;
    private Model _model;
    private IService _service;

    public Presenter()
    {
        _view = new View();
        _service = new Service();
    }

    public IView View
    {
        get { return _view; }
        set { _view = value; }
    }

    public IService Service
    {
        get { return _service; }
        set { _service = value; }
    }
}

// Lazy variant of the View property: create the default only when first requested
public IView View
{
    get
    {
        if (_view == null)
        {
            _view = new View();
        }
        return _view;
    }
    set { _view = value; }
}
// Fetching dependencies through a well known locator (StructureMap's ObjectFactory)
public class Presenter
{
    private IView _view;
    private Model _model;
    private IService _service;

    public Presenter()
    {
        // Call to StructureMap to fetch the default configurations of IView and IService
        _view = (IView) StructureMap.ObjectFactory.GetInstance(typeof(IView));
        _service = (IService) StructureMap.ObjectFactory.GetInstance(typeof(IService));
    }

    public object CreateView(Model model) {…}
    public void Close() {…}
    public void Save() {…}
}
Links
- The canonical article on Dependency Injection is from Martin Fowler.
- I have some other information on the StructureMap website
- has some good information
- Griffin Caprio (of Spring.Net) on MSDN Magazine
- J.B. Rainsberger in Better Software
HTTP Status Codes
The Wikipedia list of HTTP Status Codes is a good resource, listing the ~75 or so codes – both standards and “well known” usages.
With one byte, the 16x16 grid has lots of room to have semantics that are richer than those in HTTP.
HTTP Status Codes were defined as a section of the HTTP/1.1 RFC2616.
The W3C has a page which splits out the Status Code Definitions.
Today, HTTP Status Codes are split across multiple RFCs, with some status codes having their own RFC, e.g. RFC7538 for status code 308, Permanent Redirect.
The Mozilla Developer Network explains 418 I'm a teapot:
The HTTP 418 I'm a teapot client error response code indicates that the server refuses to brew coffee because it is a teapot. This error is a reference of Hyper Text Coffee Pot Control Protocol which was an April Fools' joke in 1998.
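As a quick illustration of how the first digit partitions the codes into classes (the helper below is just an example, not part of any spec):

def status_class(code: int) -> str:
    # Standard HTTP status classes; anything outside 100-599 is non-standard.
    classes = {1: "informational", 2: "success", 3: "redirection",
               4: "client error", 5: "server error"}
    return classes.get(code // 100, "non-standard")

assert status_class(308) == "redirection"   # Permanent Redirect (RFC 7538)
assert status_class(418) == "client error"  # the RFC 2324 teapot joke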
A useful resource for browsing the codes is the httpstatuses.com site.
Control.OnHelpRequested(HelpEventArgs) Method
Definition
Raises the HelpRequested event.
protected: virtual void OnHelpRequested(System::Windows::Forms::HelpEventArgs ^ hevent);
protected virtual void OnHelpRequested (System.Windows.Forms.HelpEventArgs hevent);
abstract member OnHelpRequested : System.Windows.Forms.HelpEventArgs -> unit override this.OnHelpRequested : System.Windows.Forms.HelpEventArgs -> unit
Protected Overridable Sub OnHelpRequested (hevent As HelpEventArgs)
Parameters
- hevent
- HelpEventArgs
A HelpEventArgs that contains the event data.
Remarks
When overriding OnHelpRequested(HelpEventArgs) in a derived class, be sure to call the base class's OnHelpRequested(HelpEventArgs) method so that registered delegates receive the event.
Download report suite settings
Steps that describe how to generate an Excel spreadsheet containing all the settings for the selected report suite.
- Click Admin > Report Suites.
- Select a report suite from the Report Suite table.
- Click Download. You can open the spreadsheet file directly, or save it for viewing.
Gets or sets whether end-users can change column widths by dragging the edges of their headers.
Namespace: DevExpress.UI.Xaml.Grid
Assembly: DevExpress.UI.Xaml.Grid.v19.2.dll
[XtraSerializableProperty]
public bool AllowResizing { get; set; }
<XtraSerializableProperty>
Public Property AllowResizing As Boolean
true to allow end-users to change column widths; otherwise, false.
End-users are allowed to resize columns if the view's AllowResizing property is set to true. Individual columns provide the ColumnBase.AllowResizing property, allowing the default behavior specified by the grid to be overridden. Setting this property to 'True' or 'False' overrides the default behavior. This can be useful when it is required to prevent an end-user from resizing individual columns.
If the column's ColumnBase.AllowResizing property is set to 'Default', the ability to resize columns is controlled by the grid's AllowResizing property. In this instance, to obtain whether an end-user can resize a column, use the ColumnBase.ActualAllowResizing property.
Managed Service Accounts (MSA)¶
Using a MSA takes five steps:
If using a workstation: Add-WindowsFeature -Name RSAT-AD-Powershell
- Add Key Distribution Center Root Key (one time operation per domain)
- You create the MSA in AD.
- You associate the MSA with a computer in AD.
- You install the MSA on the computer that was associated.
- You configure the service(s) to use the MSA.
1. KDCRootKey¶
2. Account Creation¶
3. Account Association¶
4. Account Installation¶
5. Service Configuration¶
You configure the MSA as you would configure any virtual service account (eg. DOMAIN\ServiceAccount$), without specifying a password.
Group Managed Service Accounts (gMSA)¶
gMSA behave just like a MSA. The primary difference is that you can associate further devices with the account, not just a single device. You do so by allowing the device access and then repeating the association process on each endpoint you want to be associated with the gMSA. | https://docs.itops.pt/Technologies/ADDS/ManagedServiceAccounts/ | 2020-01-18T03:10:15 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.itops.pt |
Khadas VIM3/VIM3L contains a 16 MB SPI-Flash that is used as boot storage, so you can boot from it. This guide is about how to boot from the on-board SPI-Flash.
Build U-boot For SPI-Flash
The U-Boot for SPI-Flash is the same as eMMC U-Boot. We recommend using Fenix Script to build U-Boot, as it’s easy this way.
This guide assumes that you have already set up a basic build environment. If not, please refer to Fenix Usage.
- Setup Environment: choose VIM3 or VIM3L board (this is according to your board).
- Build U-boot
If successful, you will get a U-Boot for the SPI-Flash, u-boot.bin, in the directory fenix/u-boot/build.
Burn U-boot To SPI Flash
Copy u-boot.bin to an SD-Card or Thumbdrive (U-Disk) and insert it into your board, or load it via TFTP.
Setup serial debugging tool and boot to the U-Boot Command Line.
Load U-boot to DDR
- Load U-Boot from SD-Card:
- Load U-Boot from Thumbdrive (U-Disk):
- Load U-boot via TFTP
Please refer here about how to setup the TFTP.
Burning
Tip: This will take a few seconds, please be patient.
Setup bootmode to SPI
If you want to boot from SPI Flash, you have to setup the bootmode to SPI. The default bootmode is boot from eMMC.
- Check current bootmode:
Current bootmode is boot from eMMC.
- Setup bootmode to SPI:
Poweroff the system to make it available:
Press the POWER key to boot up; you will boot from the SPI-Flash.
Erase the SPI Flash to prevent boot from it
Troubleshooting
Bootmode is boot from SPI, but the u-boot in SPI flash is corrupted, can’t enter u-boot command line.
1) If u-boot in eMMC is correct, you can try TST mode or try SPI MASKROM to boot from eMMC, then enter u-boot command line, erase the SPI flash or burn the new u-boot to SPI flash.
Note: Don’t use your PC to supply the power, or you will enter usb burning mode!
2) If U-boot in eMMC is also corrupted, you have to try TST mode to enter usb burning mode, flash the image to eMMC, then follow step 1).
Note: You need to connect the board to your host PC!
Bayesian Neural Networks¶
HiddenLayer¶
- class
HiddenLayer(X=None, A_mean=None, A_scale=None, non_linearity=<function relu>, KL_factor=1.0, A_prior_scale=1.0, include_hidden_bias=True, weight_space_sampling=False)[source]¶
This distribution is a basic building block in a Bayesian neural network. It represents a single hidden layer, i.e. an affine transformation applied to a set of inputs X followed by a non-linearity. The uncertainty in the weights is encoded in a Normal variational distribution specified by the parameters A_scale and A_mean. The so-called ‘local reparameterization trick’ is used to reduce variance (see reference below). In effect, this means the weights are never sampled directly; instead one samples in pre-activation space (i.e. before the non-linearity is applied). Since the weights are never directly sampled, when this distribution is used within the context of variational inference, care must be taken to correctly scale the KL divergence term that corresponds to the weight matrix. This term is folded into the log_prob method of this distributions.
In effect, this distribution encodes the following generative process:
A ~ Normal(A_mean, A_scale)
output ~ non_linearity(AX)
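A hedged usage sketch, loosely modelled on the BNN example in the Pyro repository: the layer is sampled inside a guide with variational parameters for A_mean and A_scale. The +1 on the input dimension assumes include_hidden_bias=True, and dataset_size is a placeholder.

import torch
import torch.nn.functional as F
from torch.distributions import constraints

import pyro
from pyro.contrib.bnn import HiddenLayer


def guide(X, n_hidden=64, dataset_size=10000):
    # X: mini-batch of inputs with shape (batch_size, n_features)
    n_in = X.size(-1) + 1  # +1 row for the hidden bias (assumption, see above)
    A_mean = pyro.param("A_mean", torch.zeros(n_in, n_hidden))
    A_scale = pyro.param("A_scale", 0.1 * torch.ones(n_in, n_hidden),
                         constraint=constraints.positive)
    with pyro.plate("data", X.size(0)):
        # Sampling happens in pre-activation space (local reparameterization);
        # the KL term for the weights is folded into log_prob via KL_factor.
        h = pyro.sample("h", HiddenLayer(X, A_mean, A_scale,
                                         non_linearity=F.relu,
                                         KL_factor=X.size(0) / dataset_size))
    return h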
Reference:
Kingma, Diederik P., Tim Salimans, and Max Welling. "Variational dropout and the local reparameterization trick." Advances in Neural Information Processing Systems. 2015.
Arguments passed to the ModuleBase.CustomizeLogics method.
Namespace: DevExpress.ExpressApp.DC
Assembly:
DevExpress.ExpressApp.v19.2.dll
public sealed class CustomLogics
Public NotInheritable Class CustomLogics
The ModuleBase.CustomizeLogics method allows you to replace the default domain logic implementations used for the Application Model interfaces with custom ones. For this purpose, a CustomLogics object, exposed by the method's customLogics parameter, supplies the following methods.
To see an example of using the CustomLogics class' methods, refer to the ModuleBase.CustomizeLogics method description.
Pew Pew map
To draw this chart, your query needs four columns with numeric values. Furthermore, to show meaningful content on the map you need two columns with values that correspond to longitude and latitude.
You must also group your data and add an aggregation to have suitable data for the map.
- Select Additional tools → Charts → Maps → Pew Pew map from the toolbar.
Click and drag the column headers to the corresponding fields. This chart requires you to select five fields:
- The Pew Pew map is displayed.
Tips
Hover over an area on the map to see, at the bottom, the value and its position in the color range.
Shortcut keys
You can hit the following keys to perform different visualization actions: